00:00:00.001 Started by upstream project "autotest-per-patch" build number 127177 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.120 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.122 The recommended git tool is: git 00:00:00.123 using credential 00000000-0000-0000-0000-000000000002 00:00:00.124 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.144 Fetching changes from the remote Git repository 00:00:00.160 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.184 Using shallow fetch with depth 1 00:00:00.184 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.184 > git --version # timeout=10 00:00:00.205 > git --version # 'git version 2.39.2' 00:00:00.205 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.222 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.222 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.205 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.218 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.230 Checking out Revision 42e00731b22fe9a8063e4b475dece9d4b345521a (FETCH_HEAD) 00:00:05.230 > git config core.sparsecheckout # timeout=10 00:00:05.242 > git read-tree -mu HEAD # timeout=10 00:00:05.258 > git checkout -f 42e00731b22fe9a8063e4b475dece9d4b345521a # timeout=5 00:00:05.275 Commit message: "jjb/autotest: add SPDK_TEST_RAID flag for docker-autotest jobs" 00:00:05.275 > git rev-list --no-walk bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=10 00:00:05.361 [Pipeline] Start of Pipeline 00:00:05.373 [Pipeline] library 00:00:05.375 Loading library shm_lib@master 00:00:05.375 Library shm_lib@master is cached. Copying from home. 00:00:05.393 [Pipeline] node 00:00:05.402 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.403 [Pipeline] { 00:00:05.415 [Pipeline] catchError 00:00:05.417 [Pipeline] { 00:00:05.430 [Pipeline] wrap 00:00:05.443 [Pipeline] { 00:00:05.452 [Pipeline] stage 00:00:05.454 [Pipeline] { (Prologue) 00:00:05.630 [Pipeline] sh 00:00:05.916 + logger -p user.info -t JENKINS-CI 00:00:05.937 [Pipeline] echo 00:00:05.938 Node: WFP8 00:00:05.943 [Pipeline] sh 00:00:06.242 [Pipeline] setCustomBuildProperty 00:00:06.251 [Pipeline] echo 00:00:06.252 Cleanup processes 00:00:06.257 [Pipeline] sh 00:00:06.537 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.537 2036337 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.550 [Pipeline] sh 00:00:06.832 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.832 ++ grep -v 'sudo pgrep' 00:00:06.832 ++ awk '{print $1}' 00:00:06.832 + sudo kill -9 00:00:06.832 + true 00:00:06.847 [Pipeline] cleanWs 00:00:06.857 [WS-CLEANUP] Deleting project workspace... 00:00:06.857 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.863 [WS-CLEANUP] done 00:00:06.867 [Pipeline] setCustomBuildProperty 00:00:06.880 [Pipeline] sh 00:00:07.158 + sudo git config --global --replace-all safe.directory '*' 00:00:07.223 [Pipeline] httpRequest 00:00:07.255 [Pipeline] echo 00:00:07.256 Sorcerer 10.211.164.101 is alive 00:00:07.264 [Pipeline] httpRequest 00:00:07.269 HttpMethod: GET 00:00:07.269 URL: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:07.270 Sending request to url: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:07.283 Response Code: HTTP/1.1 200 OK 00:00:07.284 Success: Status code 200 is in the accepted range: 200,404 00:00:07.284 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:10.313 [Pipeline] sh 00:00:10.595 + tar --no-same-owner -xf jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:10.611 [Pipeline] httpRequest 00:00:10.630 [Pipeline] echo 00:00:10.632 Sorcerer 10.211.164.101 is alive 00:00:10.641 [Pipeline] httpRequest 00:00:10.646 HttpMethod: GET 00:00:10.647 URL: http://10.211.164.101/packages/spdk_e7b60083527e112032c8d9998b791dd442e161c9.tar.gz 00:00:10.647 Sending request to url: http://10.211.164.101/packages/spdk_e7b60083527e112032c8d9998b791dd442e161c9.tar.gz 00:00:10.652 Response Code: HTTP/1.1 200 OK 00:00:10.653 Success: Status code 200 is in the accepted range: 200,404 00:00:10.654 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e7b60083527e112032c8d9998b791dd442e161c9.tar.gz 00:01:09.069 [Pipeline] sh 00:01:09.354 + tar --no-same-owner -xf spdk_e7b60083527e112032c8d9998b791dd442e161c9.tar.gz 00:01:11.903 [Pipeline] sh 00:01:12.188 + git -C spdk log --oneline -n5 00:01:12.188 e7b600835 raid5f: DIF/DIX implementation and tests for RAID5f 00:01:12.188 245cf1e7a raid1: DIF/DIX implementation and tests for RAID1 00:01:12.188 fdb617083 raid0: DIF/DIX implementation and tests for RAID0 00:01:12.188 3c25cfe1d raid: Generic changes to support DIF/DIX for RAID 00:01:12.188 0e983c564 nvmf/tcp: use sock group polling for the listening sockets 00:01:12.200 [Pipeline] } 00:01:12.216 [Pipeline] // stage 00:01:12.226 [Pipeline] stage 00:01:12.229 [Pipeline] { (Prepare) 00:01:12.244 [Pipeline] writeFile 00:01:12.257 [Pipeline] sh 00:01:12.538 + logger -p user.info -t JENKINS-CI 00:01:12.551 [Pipeline] sh 00:01:12.836 + logger -p user.info -t JENKINS-CI 00:01:12.849 [Pipeline] sh 00:01:13.134 + cat autorun-spdk.conf 00:01:13.134 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.134 SPDK_TEST_NVMF=1 00:01:13.134 SPDK_TEST_NVME_CLI=1 00:01:13.134 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.134 SPDK_TEST_NVMF_NICS=e810 00:01:13.134 SPDK_TEST_VFIOUSER=1 00:01:13.134 SPDK_RUN_UBSAN=1 00:01:13.134 NET_TYPE=phy 00:01:13.142 RUN_NIGHTLY=0 00:01:13.146 [Pipeline] readFile 00:01:13.204 [Pipeline] withEnv 00:01:13.206 [Pipeline] { 00:01:13.220 [Pipeline] sh 00:01:13.526 + set -ex 00:01:13.526 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:13.526 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:13.526 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.526 ++ SPDK_TEST_NVMF=1 00:01:13.526 ++ SPDK_TEST_NVME_CLI=1 00:01:13.526 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.526 ++ SPDK_TEST_NVMF_NICS=e810 00:01:13.526 ++ SPDK_TEST_VFIOUSER=1 00:01:13.526 ++ SPDK_RUN_UBSAN=1 00:01:13.526 ++ NET_TYPE=phy 00:01:13.526 ++ RUN_NIGHTLY=0 00:01:13.526 + case $SPDK_TEST_NVMF_NICS in 00:01:13.526 + DRIVERS=ice 
00:01:13.526 + [[ tcp == \r\d\m\a ]] 00:01:13.526 + [[ -n ice ]] 00:01:13.526 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:13.526 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:13.526 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:13.526 rmmod: ERROR: Module irdma is not currently loaded 00:01:13.526 rmmod: ERROR: Module i40iw is not currently loaded 00:01:13.526 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:13.526 + true 00:01:13.526 + for D in $DRIVERS 00:01:13.526 + sudo modprobe ice 00:01:13.526 + exit 0 00:01:13.536 [Pipeline] } 00:01:13.554 [Pipeline] // withEnv 00:01:13.560 [Pipeline] } 00:01:13.577 [Pipeline] // stage 00:01:13.588 [Pipeline] catchError 00:01:13.590 [Pipeline] { 00:01:13.609 [Pipeline] timeout 00:01:13.609 Timeout set to expire in 50 min 00:01:13.612 [Pipeline] { 00:01:13.627 [Pipeline] stage 00:01:13.629 [Pipeline] { (Tests) 00:01:13.640 [Pipeline] sh 00:01:13.922 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.922 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.922 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.922 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:13.922 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.922 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:13.922 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:13.922 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:13.922 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:13.922 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:13.922 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:13.922 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.922 + source /etc/os-release 00:01:13.922 ++ NAME='Fedora Linux' 00:01:13.922 ++ VERSION='38 (Cloud Edition)' 00:01:13.922 ++ ID=fedora 00:01:13.922 ++ VERSION_ID=38 00:01:13.922 ++ VERSION_CODENAME= 00:01:13.922 ++ PLATFORM_ID=platform:f38 00:01:13.922 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:13.922 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:13.922 ++ LOGO=fedora-logo-icon 00:01:13.922 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:13.922 ++ HOME_URL=https://fedoraproject.org/ 00:01:13.922 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:13.922 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:13.922 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:13.922 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:13.922 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:13.922 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:13.922 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:13.922 ++ SUPPORT_END=2024-05-14 00:01:13.922 ++ VARIANT='Cloud Edition' 00:01:13.922 ++ VARIANT_ID=cloud 00:01:13.922 + uname -a 00:01:13.922 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:13.922 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:16.459 Hugepages 00:01:16.459 node hugesize free / total 00:01:16.459 node0 1048576kB 0 / 0 00:01:16.459 node0 2048kB 0 / 0 00:01:16.459 node1 1048576kB 0 / 0 00:01:16.459 node1 2048kB 0 / 0 00:01:16.459 00:01:16.459 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:16.459 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:16.459 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:16.459 I/OAT 0000:00:04.2 8086 2021 0 ioatdma 
- - 00:01:16.459 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:16.459 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:16.459 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:16.459 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:16.459 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:16.459 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:16.460 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:16.460 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:16.460 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:16.460 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:16.460 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:16.460 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:16.460 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:16.460 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:16.460 + rm -f /tmp/spdk-ld-path 00:01:16.460 + source autorun-spdk.conf 00:01:16.460 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.460 ++ SPDK_TEST_NVMF=1 00:01:16.460 ++ SPDK_TEST_NVME_CLI=1 00:01:16.460 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.460 ++ SPDK_TEST_NVMF_NICS=e810 00:01:16.460 ++ SPDK_TEST_VFIOUSER=1 00:01:16.460 ++ SPDK_RUN_UBSAN=1 00:01:16.460 ++ NET_TYPE=phy 00:01:16.460 ++ RUN_NIGHTLY=0 00:01:16.460 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:16.460 + [[ -n '' ]] 00:01:16.460 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:16.460 + for M in /var/spdk/build-*-manifest.txt 00:01:16.460 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:16.460 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:16.460 + for M in /var/spdk/build-*-manifest.txt 00:01:16.460 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:16.460 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:16.460 ++ uname 00:01:16.460 + [[ Linux == \L\i\n\u\x ]] 00:01:16.460 + sudo dmesg -T 00:01:16.460 + sudo dmesg --clear 00:01:16.460 + dmesg_pid=2037258 00:01:16.460 + [[ Fedora Linux == FreeBSD ]] 00:01:16.460 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.460 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.460 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:16.460 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:16.460 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:16.460 + [[ -x /usr/src/fio-static/fio ]] 00:01:16.460 + export FIO_BIN=/usr/src/fio-static/fio 00:01:16.460 + FIO_BIN=/usr/src/fio-static/fio 00:01:16.460 + sudo dmesg -Tw 00:01:16.460 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:16.460 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:16.460 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:16.460 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.460 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.460 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:16.460 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.460 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.460 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:16.460 Test configuration: 00:01:16.460 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.460 SPDK_TEST_NVMF=1 00:01:16.460 SPDK_TEST_NVME_CLI=1 00:01:16.460 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.460 SPDK_TEST_NVMF_NICS=e810 00:01:16.460 SPDK_TEST_VFIOUSER=1 00:01:16.460 SPDK_RUN_UBSAN=1 00:01:16.460 NET_TYPE=phy 00:01:16.720 RUN_NIGHTLY=0 14:28:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:16.720 14:28:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:16.720 14:28:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:16.720 14:28:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:16.720 14:28:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.720 14:28:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.720 14:28:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.720 14:28:36 -- paths/export.sh@5 -- $ export PATH 00:01:16.720 14:28:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.720 14:28:36 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:16.720 14:28:36 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:16.720 14:28:36 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721910516.XXXXXX 00:01:16.720 14:28:36 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721910516.98XO7n 00:01:16.720 14:28:36 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:16.720 14:28:36 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:16.720 14:28:36 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:16.720 14:28:36 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:16.720 14:28:36 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:16.720 14:28:36 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:16.720 14:28:36 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:16.720 14:28:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.720 14:28:36 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:16.720 14:28:36 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:16.720 14:28:36 -- pm/common@17 -- $ local monitor 00:01:16.720 14:28:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.720 14:28:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.720 14:28:36 -- pm/common@21 -- $ date +%s 00:01:16.720 14:28:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.720 14:28:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.720 14:28:36 -- pm/common@21 -- $ date +%s 00:01:16.720 14:28:36 -- pm/common@25 -- $ sleep 1 00:01:16.720 14:28:36 -- pm/common@21 -- $ date +%s 00:01:16.720 14:28:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721910516 00:01:16.720 14:28:36 -- pm/common@21 -- $ date +%s 00:01:16.720 14:28:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721910516 00:01:16.720 14:28:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721910516 00:01:16.720 14:28:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721910516 00:01:16.720 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721910516_collect-vmstat.pm.log 00:01:16.720 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721910516_collect-cpu-load.pm.log 00:01:16.720 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721910516_collect-cpu-temp.pm.log 00:01:16.720 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721910516_collect-bmc-pm.bmc.pm.log 00:01:17.659 14:28:37 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:17.660 14:28:37 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:17.660 14:28:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:17.660 14:28:37 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:17.660 14:28:37 -- spdk/autobuild.sh@16 -- $ date -u 00:01:17.660 Thu Jul 25 12:28:37 PM UTC 2024 00:01:17.660 14:28:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:17.660 v24.09-pre-227-ge7b600835 00:01:17.660 14:28:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:17.660 14:28:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:17.660 14:28:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:17.660 14:28:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:17.660 14:28:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:17.660 14:28:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.660 ************************************ 00:01:17.660 START TEST ubsan 00:01:17.660 ************************************ 00:01:17.660 14:28:37 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:17.660 using ubsan 00:01:17.660 00:01:17.660 real 0m0.000s 00:01:17.660 user 0m0.000s 00:01:17.660 sys 0m0.000s 00:01:17.660 14:28:37 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:17.660 14:28:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:17.660 ************************************ 00:01:17.660 END TEST ubsan 00:01:17.660 ************************************ 00:01:17.660 14:28:37 -- common/autotest_common.sh@1142 -- $ return 0 00:01:17.660 14:28:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:17.660 14:28:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:17.660 14:28:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:17.660 14:28:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:17.660 14:28:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:17.660 14:28:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:17.660 14:28:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:17.660 14:28:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:17.660 14:28:37 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:17.919 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:17.919 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:18.178 Using 'verbs' RDMA provider 00:01:31.342 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:43.576 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:43.576 Creating mk/config.mk...done. 00:01:43.576 Creating mk/cc.flags.mk...done. 00:01:43.576 Type 'make' to build. 00:01:43.576 14:29:02 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:43.576 14:29:02 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:43.576 14:29:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:43.576 14:29:02 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.576 ************************************ 00:01:43.576 START TEST make 00:01:43.576 ************************************ 00:01:43.577 14:29:02 make -- common/autotest_common.sh@1123 -- $ make -j96 00:01:43.577 make[1]: Nothing to be done for 'all'. 
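(Editorial aside, not part of the captured log: the configure invocation and "make -j96" recorded above correspond roughly to the manual sequence sketched below. The configure flags are copied from this run; the clone URL, the submodule step, and the -j value are assumptions for a developer box rather than the CI tarball workflow.)

    # sketch only: rebuilding SPDK by hand with the same options as this CI run
    git clone https://github.com/spdk/spdk && cd spdk
    git submodule update --init          # assumption: dependencies come in as submodules
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"                    # the CI job pins this to -j96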
00:01:44.148 The Meson build system 00:01:44.149 Version: 1.3.1 00:01:44.149 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:44.149 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:44.149 Build type: native build 00:01:44.149 Project name: libvfio-user 00:01:44.149 Project version: 0.0.1 00:01:44.149 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:44.149 C linker for the host machine: cc ld.bfd 2.39-16 00:01:44.149 Host machine cpu family: x86_64 00:01:44.149 Host machine cpu: x86_64 00:01:44.149 Run-time dependency threads found: YES 00:01:44.149 Library dl found: YES 00:01:44.149 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:44.149 Run-time dependency json-c found: YES 0.17 00:01:44.149 Run-time dependency cmocka found: YES 1.1.7 00:01:44.149 Program pytest-3 found: NO 00:01:44.149 Program flake8 found: NO 00:01:44.149 Program misspell-fixer found: NO 00:01:44.149 Program restructuredtext-lint found: NO 00:01:44.149 Program valgrind found: YES (/usr/bin/valgrind) 00:01:44.149 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:44.149 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:44.149 Compiler for C supports arguments -Wwrite-strings: YES 00:01:44.149 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:44.149 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:44.149 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:44.149 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:44.149 Build targets in project: 8 00:01:44.149 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:44.149 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:44.149 00:01:44.149 libvfio-user 0.0.1 00:01:44.149 00:01:44.149 User defined options 00:01:44.149 buildtype : debug 00:01:44.149 default_library: shared 00:01:44.149 libdir : /usr/local/lib 00:01:44.149 00:01:44.149 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.407 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:44.666 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:44.666 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:44.666 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:44.666 [4/37] Compiling C object samples/null.p/null.c.o 00:01:44.666 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:44.666 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:44.666 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:44.666 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:44.666 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:44.666 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:44.666 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:44.666 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:44.666 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:44.666 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:44.666 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:44.666 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:44.666 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:44.666 [18/37] Compiling C object samples/server.p/server.c.o 00:01:44.666 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:44.666 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:44.666 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:44.666 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:44.666 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:44.666 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:44.666 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:44.666 [26/37] Compiling C object samples/client.p/client.c.o 00:01:44.666 [27/37] Linking target samples/client 00:01:44.666 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:44.666 [29/37] Linking target test/unit_tests 00:01:44.924 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:44.924 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:44.924 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:44.924 [33/37] Linking target samples/lspci 00:01:44.924 [34/37] Linking target samples/server 00:01:44.924 [35/37] Linking target samples/gpio-pci-idio-16 00:01:44.924 [36/37] Linking target samples/null 00:01:44.924 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:45.212 INFO: autodetecting backend as ninja 00:01:45.212 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
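(Editorial aside, not part of the captured log: the libvfio-user steps above are ordinary Meson/Ninja usage; a hand-run equivalent, with an illustrative build-directory name, would look like the sketch below. The install line mirrors the DESTDIR form the job uses.)

    # sketch only: configure, build, and stage libvfio-user the way the log does
    meson setup build-debug libvfio-user --buildtype=debug --default-library=shared
    ninja -C build-debug
    DESTDIR="$PWD/libvfio-user-install" meson install --quiet -C build-debug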
00:01:45.212 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:45.474 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:45.474 ninja: no work to do. 00:01:50.756 The Meson build system 00:01:50.756 Version: 1.3.1 00:01:50.756 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:50.756 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:50.756 Build type: native build 00:01:50.756 Program cat found: YES (/usr/bin/cat) 00:01:50.756 Project name: DPDK 00:01:50.756 Project version: 24.03.0 00:01:50.756 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:50.756 C linker for the host machine: cc ld.bfd 2.39-16 00:01:50.756 Host machine cpu family: x86_64 00:01:50.756 Host machine cpu: x86_64 00:01:50.756 Message: ## Building in Developer Mode ## 00:01:50.756 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:50.756 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:50.756 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:50.756 Program python3 found: YES (/usr/bin/python3) 00:01:50.756 Program cat found: YES (/usr/bin/cat) 00:01:50.756 Compiler for C supports arguments -march=native: YES 00:01:50.756 Checking for size of "void *" : 8 00:01:50.756 Checking for size of "void *" : 8 (cached) 00:01:50.756 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:50.756 Library m found: YES 00:01:50.756 Library numa found: YES 00:01:50.756 Has header "numaif.h" : YES 00:01:50.756 Library fdt found: NO 00:01:50.756 Library execinfo found: NO 00:01:50.756 Has header "execinfo.h" : YES 00:01:50.756 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:50.756 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:50.756 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:50.756 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:50.756 Run-time dependency openssl found: YES 3.0.9 00:01:50.756 Run-time dependency libpcap found: YES 1.10.4 00:01:50.756 Has header "pcap.h" with dependency libpcap: YES 00:01:50.756 Compiler for C supports arguments -Wcast-qual: YES 00:01:50.756 Compiler for C supports arguments -Wdeprecated: YES 00:01:50.756 Compiler for C supports arguments -Wformat: YES 00:01:50.756 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:50.756 Compiler for C supports arguments -Wformat-security: NO 00:01:50.756 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:50.756 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:50.756 Compiler for C supports arguments -Wnested-externs: YES 00:01:50.756 Compiler for C supports arguments -Wold-style-definition: YES 00:01:50.756 Compiler for C supports arguments -Wpointer-arith: YES 00:01:50.756 Compiler for C supports arguments -Wsign-compare: YES 00:01:50.756 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:50.756 Compiler for C supports arguments -Wundef: YES 00:01:50.756 Compiler for C supports arguments -Wwrite-strings: YES 00:01:50.756 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:50.756 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:50.756 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:50.756 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:50.756 Program objdump found: YES (/usr/bin/objdump) 00:01:50.756 Compiler for C supports arguments -mavx512f: YES 00:01:50.756 Checking if "AVX512 checking" compiles: YES 00:01:50.756 Fetching value of define "__SSE4_2__" : 1 00:01:50.756 Fetching value of define "__AES__" : 1 00:01:50.756 Fetching value of define "__AVX__" : 1 00:01:50.756 Fetching value of define "__AVX2__" : 1 00:01:50.756 Fetching value of define "__AVX512BW__" : 1 00:01:50.756 Fetching value of define "__AVX512CD__" : 1 00:01:50.756 Fetching value of define "__AVX512DQ__" : 1 00:01:50.756 Fetching value of define "__AVX512F__" : 1 00:01:50.756 Fetching value of define "__AVX512VL__" : 1 00:01:50.756 Fetching value of define "__PCLMUL__" : 1 00:01:50.756 Fetching value of define "__RDRND__" : 1 00:01:50.756 Fetching value of define "__RDSEED__" : 1 00:01:50.756 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:50.756 Fetching value of define "__znver1__" : (undefined) 00:01:50.756 Fetching value of define "__znver2__" : (undefined) 00:01:50.756 Fetching value of define "__znver3__" : (undefined) 00:01:50.756 Fetching value of define "__znver4__" : (undefined) 00:01:50.756 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:50.756 Message: lib/log: Defining dependency "log" 00:01:50.756 Message: lib/kvargs: Defining dependency "kvargs" 00:01:50.756 Message: lib/telemetry: Defining dependency "telemetry" 00:01:50.756 Checking for function "getentropy" : NO 00:01:50.756 Message: lib/eal: Defining dependency "eal" 00:01:50.756 Message: lib/ring: Defining dependency "ring" 00:01:50.756 Message: lib/rcu: Defining dependency "rcu" 00:01:50.756 Message: lib/mempool: Defining dependency "mempool" 00:01:50.756 Message: lib/mbuf: Defining dependency "mbuf" 00:01:50.756 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:50.756 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:50.756 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:50.756 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:50.756 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:50.756 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:50.756 Compiler for C supports arguments -mpclmul: YES 00:01:50.756 Compiler for C supports arguments -maes: YES 00:01:50.756 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:50.756 Compiler for C supports arguments -mavx512bw: YES 00:01:50.756 Compiler for C supports arguments -mavx512dq: YES 00:01:50.756 Compiler for C supports arguments -mavx512vl: YES 00:01:50.756 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:50.756 Compiler for C supports arguments -mavx2: YES 00:01:50.756 Compiler for C supports arguments -mavx: YES 00:01:50.756 Message: lib/net: Defining dependency "net" 00:01:50.756 Message: lib/meter: Defining dependency "meter" 00:01:50.756 Message: lib/ethdev: Defining dependency "ethdev" 00:01:50.756 Message: lib/pci: Defining dependency "pci" 00:01:50.756 Message: lib/cmdline: Defining dependency "cmdline" 00:01:50.756 Message: lib/hash: Defining dependency "hash" 00:01:50.756 Message: lib/timer: Defining dependency "timer" 00:01:50.756 Message: lib/compressdev: Defining dependency "compressdev" 00:01:50.756 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:50.756 Message: lib/dmadev: Defining dependency "dmadev" 00:01:50.756 
Compiler for C supports arguments -Wno-cast-qual: YES 00:01:50.757 Message: lib/power: Defining dependency "power" 00:01:50.757 Message: lib/reorder: Defining dependency "reorder" 00:01:50.757 Message: lib/security: Defining dependency "security" 00:01:50.757 Has header "linux/userfaultfd.h" : YES 00:01:50.757 Has header "linux/vduse.h" : YES 00:01:50.757 Message: lib/vhost: Defining dependency "vhost" 00:01:50.757 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:50.757 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:50.757 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:50.757 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.757 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:50.757 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:50.757 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:50.757 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:50.757 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:50.757 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:50.757 Program doxygen found: YES (/usr/bin/doxygen) 00:01:50.757 Configuring doxy-api-html.conf using configuration 00:01:50.757 Configuring doxy-api-man.conf using configuration 00:01:50.757 Program mandb found: YES (/usr/bin/mandb) 00:01:50.757 Program sphinx-build found: NO 00:01:50.757 Configuring rte_build_config.h using configuration 00:01:50.757 Message: 00:01:50.757 ================= 00:01:50.757 Applications Enabled 00:01:50.757 ================= 00:01:50.757 00:01:50.757 apps: 00:01:50.757 00:01:50.757 00:01:50.757 Message: 00:01:50.757 ================= 00:01:50.757 Libraries Enabled 00:01:50.757 ================= 00:01:50.757 00:01:50.757 libs: 00:01:50.757 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:50.757 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:50.757 cryptodev, dmadev, power, reorder, security, vhost, 00:01:50.757 00:01:50.757 Message: 00:01:50.757 =============== 00:01:50.757 Drivers Enabled 00:01:50.757 =============== 00:01:50.757 00:01:50.757 common: 00:01:50.757 00:01:50.757 bus: 00:01:50.757 pci, vdev, 00:01:50.757 mempool: 00:01:50.757 ring, 00:01:50.757 dma: 00:01:50.757 00:01:50.757 net: 00:01:50.757 00:01:50.757 crypto: 00:01:50.757 00:01:50.757 compress: 00:01:50.757 00:01:50.757 vdpa: 00:01:50.757 00:01:50.757 00:01:50.757 Message: 00:01:50.757 ================= 00:01:50.757 Content Skipped 00:01:50.757 ================= 00:01:50.757 00:01:50.757 apps: 00:01:50.757 dumpcap: explicitly disabled via build config 00:01:50.757 graph: explicitly disabled via build config 00:01:50.757 pdump: explicitly disabled via build config 00:01:50.757 proc-info: explicitly disabled via build config 00:01:50.757 test-acl: explicitly disabled via build config 00:01:50.757 test-bbdev: explicitly disabled via build config 00:01:50.757 test-cmdline: explicitly disabled via build config 00:01:50.757 test-compress-perf: explicitly disabled via build config 00:01:50.757 test-crypto-perf: explicitly disabled via build config 00:01:50.757 test-dma-perf: explicitly disabled via build config 00:01:50.757 test-eventdev: explicitly disabled via build config 00:01:50.757 test-fib: explicitly disabled via build config 00:01:50.757 test-flow-perf: explicitly disabled via build config 00:01:50.757 test-gpudev: explicitly disabled via build config 
00:01:50.757 test-mldev: explicitly disabled via build config 00:01:50.757 test-pipeline: explicitly disabled via build config 00:01:50.757 test-pmd: explicitly disabled via build config 00:01:50.757 test-regex: explicitly disabled via build config 00:01:50.757 test-sad: explicitly disabled via build config 00:01:50.757 test-security-perf: explicitly disabled via build config 00:01:50.757 00:01:50.757 libs: 00:01:50.757 argparse: explicitly disabled via build config 00:01:50.757 metrics: explicitly disabled via build config 00:01:50.757 acl: explicitly disabled via build config 00:01:50.757 bbdev: explicitly disabled via build config 00:01:50.757 bitratestats: explicitly disabled via build config 00:01:50.757 bpf: explicitly disabled via build config 00:01:50.757 cfgfile: explicitly disabled via build config 00:01:50.757 distributor: explicitly disabled via build config 00:01:50.757 efd: explicitly disabled via build config 00:01:50.757 eventdev: explicitly disabled via build config 00:01:50.757 dispatcher: explicitly disabled via build config 00:01:50.757 gpudev: explicitly disabled via build config 00:01:50.757 gro: explicitly disabled via build config 00:01:50.757 gso: explicitly disabled via build config 00:01:50.757 ip_frag: explicitly disabled via build config 00:01:50.757 jobstats: explicitly disabled via build config 00:01:50.757 latencystats: explicitly disabled via build config 00:01:50.757 lpm: explicitly disabled via build config 00:01:50.757 member: explicitly disabled via build config 00:01:50.757 pcapng: explicitly disabled via build config 00:01:50.757 rawdev: explicitly disabled via build config 00:01:50.757 regexdev: explicitly disabled via build config 00:01:50.757 mldev: explicitly disabled via build config 00:01:50.757 rib: explicitly disabled via build config 00:01:50.757 sched: explicitly disabled via build config 00:01:50.757 stack: explicitly disabled via build config 00:01:50.757 ipsec: explicitly disabled via build config 00:01:50.757 pdcp: explicitly disabled via build config 00:01:50.757 fib: explicitly disabled via build config 00:01:50.757 port: explicitly disabled via build config 00:01:50.757 pdump: explicitly disabled via build config 00:01:50.757 table: explicitly disabled via build config 00:01:50.757 pipeline: explicitly disabled via build config 00:01:50.757 graph: explicitly disabled via build config 00:01:50.757 node: explicitly disabled via build config 00:01:50.757 00:01:50.757 drivers: 00:01:50.757 common/cpt: not in enabled drivers build config 00:01:50.757 common/dpaax: not in enabled drivers build config 00:01:50.757 common/iavf: not in enabled drivers build config 00:01:50.757 common/idpf: not in enabled drivers build config 00:01:50.757 common/ionic: not in enabled drivers build config 00:01:50.757 common/mvep: not in enabled drivers build config 00:01:50.757 common/octeontx: not in enabled drivers build config 00:01:50.757 bus/auxiliary: not in enabled drivers build config 00:01:50.757 bus/cdx: not in enabled drivers build config 00:01:50.757 bus/dpaa: not in enabled drivers build config 00:01:50.757 bus/fslmc: not in enabled drivers build config 00:01:50.757 bus/ifpga: not in enabled drivers build config 00:01:50.757 bus/platform: not in enabled drivers build config 00:01:50.757 bus/uacce: not in enabled drivers build config 00:01:50.757 bus/vmbus: not in enabled drivers build config 00:01:50.757 common/cnxk: not in enabled drivers build config 00:01:50.757 common/mlx5: not in enabled drivers build config 00:01:50.757 common/nfp: not in 
enabled drivers build config 00:01:50.757 common/nitrox: not in enabled drivers build config 00:01:50.757 common/qat: not in enabled drivers build config 00:01:50.757 common/sfc_efx: not in enabled drivers build config 00:01:50.757 mempool/bucket: not in enabled drivers build config 00:01:50.757 mempool/cnxk: not in enabled drivers build config 00:01:50.757 mempool/dpaa: not in enabled drivers build config 00:01:50.757 mempool/dpaa2: not in enabled drivers build config 00:01:50.757 mempool/octeontx: not in enabled drivers build config 00:01:50.757 mempool/stack: not in enabled drivers build config 00:01:50.757 dma/cnxk: not in enabled drivers build config 00:01:50.757 dma/dpaa: not in enabled drivers build config 00:01:50.757 dma/dpaa2: not in enabled drivers build config 00:01:50.757 dma/hisilicon: not in enabled drivers build config 00:01:50.757 dma/idxd: not in enabled drivers build config 00:01:50.757 dma/ioat: not in enabled drivers build config 00:01:50.757 dma/skeleton: not in enabled drivers build config 00:01:50.757 net/af_packet: not in enabled drivers build config 00:01:50.757 net/af_xdp: not in enabled drivers build config 00:01:50.757 net/ark: not in enabled drivers build config 00:01:50.757 net/atlantic: not in enabled drivers build config 00:01:50.757 net/avp: not in enabled drivers build config 00:01:50.757 net/axgbe: not in enabled drivers build config 00:01:50.757 net/bnx2x: not in enabled drivers build config 00:01:50.757 net/bnxt: not in enabled drivers build config 00:01:50.757 net/bonding: not in enabled drivers build config 00:01:50.757 net/cnxk: not in enabled drivers build config 00:01:50.757 net/cpfl: not in enabled drivers build config 00:01:50.757 net/cxgbe: not in enabled drivers build config 00:01:50.757 net/dpaa: not in enabled drivers build config 00:01:50.757 net/dpaa2: not in enabled drivers build config 00:01:50.757 net/e1000: not in enabled drivers build config 00:01:50.757 net/ena: not in enabled drivers build config 00:01:50.757 net/enetc: not in enabled drivers build config 00:01:50.757 net/enetfec: not in enabled drivers build config 00:01:50.757 net/enic: not in enabled drivers build config 00:01:50.757 net/failsafe: not in enabled drivers build config 00:01:50.757 net/fm10k: not in enabled drivers build config 00:01:50.757 net/gve: not in enabled drivers build config 00:01:50.757 net/hinic: not in enabled drivers build config 00:01:50.757 net/hns3: not in enabled drivers build config 00:01:50.757 net/i40e: not in enabled drivers build config 00:01:50.757 net/iavf: not in enabled drivers build config 00:01:50.757 net/ice: not in enabled drivers build config 00:01:50.757 net/idpf: not in enabled drivers build config 00:01:50.757 net/igc: not in enabled drivers build config 00:01:50.757 net/ionic: not in enabled drivers build config 00:01:50.757 net/ipn3ke: not in enabled drivers build config 00:01:50.757 net/ixgbe: not in enabled drivers build config 00:01:50.757 net/mana: not in enabled drivers build config 00:01:50.757 net/memif: not in enabled drivers build config 00:01:50.757 net/mlx4: not in enabled drivers build config 00:01:50.758 net/mlx5: not in enabled drivers build config 00:01:50.758 net/mvneta: not in enabled drivers build config 00:01:50.758 net/mvpp2: not in enabled drivers build config 00:01:50.758 net/netvsc: not in enabled drivers build config 00:01:50.758 net/nfb: not in enabled drivers build config 00:01:50.758 net/nfp: not in enabled drivers build config 00:01:50.758 net/ngbe: not in enabled drivers build config 00:01:50.758 
net/null: not in enabled drivers build config 00:01:50.758 net/octeontx: not in enabled drivers build config 00:01:50.758 net/octeon_ep: not in enabled drivers build config 00:01:50.758 net/pcap: not in enabled drivers build config 00:01:50.758 net/pfe: not in enabled drivers build config 00:01:50.758 net/qede: not in enabled drivers build config 00:01:50.758 net/ring: not in enabled drivers build config 00:01:50.758 net/sfc: not in enabled drivers build config 00:01:50.758 net/softnic: not in enabled drivers build config 00:01:50.758 net/tap: not in enabled drivers build config 00:01:50.758 net/thunderx: not in enabled drivers build config 00:01:50.758 net/txgbe: not in enabled drivers build config 00:01:50.758 net/vdev_netvsc: not in enabled drivers build config 00:01:50.758 net/vhost: not in enabled drivers build config 00:01:50.758 net/virtio: not in enabled drivers build config 00:01:50.758 net/vmxnet3: not in enabled drivers build config 00:01:50.758 raw/*: missing internal dependency, "rawdev" 00:01:50.758 crypto/armv8: not in enabled drivers build config 00:01:50.758 crypto/bcmfs: not in enabled drivers build config 00:01:50.758 crypto/caam_jr: not in enabled drivers build config 00:01:50.758 crypto/ccp: not in enabled drivers build config 00:01:50.758 crypto/cnxk: not in enabled drivers build config 00:01:50.758 crypto/dpaa_sec: not in enabled drivers build config 00:01:50.758 crypto/dpaa2_sec: not in enabled drivers build config 00:01:50.758 crypto/ipsec_mb: not in enabled drivers build config 00:01:50.758 crypto/mlx5: not in enabled drivers build config 00:01:50.758 crypto/mvsam: not in enabled drivers build config 00:01:50.758 crypto/nitrox: not in enabled drivers build config 00:01:50.758 crypto/null: not in enabled drivers build config 00:01:50.758 crypto/octeontx: not in enabled drivers build config 00:01:50.758 crypto/openssl: not in enabled drivers build config 00:01:50.758 crypto/scheduler: not in enabled drivers build config 00:01:50.758 crypto/uadk: not in enabled drivers build config 00:01:50.758 crypto/virtio: not in enabled drivers build config 00:01:50.758 compress/isal: not in enabled drivers build config 00:01:50.758 compress/mlx5: not in enabled drivers build config 00:01:50.758 compress/nitrox: not in enabled drivers build config 00:01:50.758 compress/octeontx: not in enabled drivers build config 00:01:50.758 compress/zlib: not in enabled drivers build config 00:01:50.758 regex/*: missing internal dependency, "regexdev" 00:01:50.758 ml/*: missing internal dependency, "mldev" 00:01:50.758 vdpa/ifc: not in enabled drivers build config 00:01:50.758 vdpa/mlx5: not in enabled drivers build config 00:01:50.758 vdpa/nfp: not in enabled drivers build config 00:01:50.758 vdpa/sfc: not in enabled drivers build config 00:01:50.758 event/*: missing internal dependency, "eventdev" 00:01:50.758 baseband/*: missing internal dependency, "bbdev" 00:01:50.758 gpu/*: missing internal dependency, "gpudev" 00:01:50.758 00:01:50.758 00:01:50.758 Build targets in project: 85 00:01:50.758 00:01:50.758 DPDK 24.03.0 00:01:50.758 00:01:50.758 User defined options 00:01:50.758 buildtype : debug 00:01:50.758 default_library : shared 00:01:50.758 libdir : lib 00:01:50.758 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:50.758 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:50.758 c_link_args : 00:01:50.758 cpu_instruction_set: native 00:01:50.758 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:50.758 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:50.758 enable_docs : false 00:01:50.758 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:50.758 enable_kmods : false 00:01:50.758 max_lcores : 128 00:01:50.758 tests : false 00:01:50.758 00:01:50.758 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.758 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:50.758 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:50.758 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:50.758 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:50.758 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:50.758 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:50.758 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:50.758 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:50.758 [8/268] Linking static target lib/librte_kvargs.a 00:01:50.758 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:50.758 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:50.758 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:50.758 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:50.758 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:50.758 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:50.758 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:50.758 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:51.021 [17/268] Linking static target lib/librte_log.a 00:01:51.021 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.021 [19/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:51.021 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:51.021 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:51.021 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:51.021 [23/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:51.021 [24/268] Linking static target lib/librte_pci.a 00:01:51.021 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:51.284 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:51.284 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:51.284 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.284 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.284 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:51.284 [31/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:51.284 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:51.284 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:51.284 [34/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:51.284 [35/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:51.284 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:51.284 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:51.284 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:51.284 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:51.284 [40/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:51.284 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:51.284 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:51.284 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.284 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.284 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:51.284 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:51.284 [47/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:51.284 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:51.284 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:51.284 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:51.284 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:51.284 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:51.284 [53/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:51.284 [54/268] Linking static target lib/librte_meter.a 00:01:51.284 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:51.284 [56/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:51.284 [57/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:51.284 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:51.284 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:51.284 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:51.284 [61/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.284 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:51.284 [63/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.284 [64/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:51.284 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:51.284 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:51.284 [67/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:51.284 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:51.284 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:51.284 [70/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:51.284 
[71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:51.284 [72/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:51.284 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:51.284 [74/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:51.284 [75/268] Linking static target lib/librte_telemetry.a 00:01:51.284 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:51.284 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:51.284 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.284 [79/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:51.284 [80/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:51.284 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:51.284 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:51.284 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:51.284 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.284 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:51.284 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:51.284 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:51.284 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:51.284 [89/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:51.284 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:51.544 [91/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:51.544 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:51.544 [93/268] Linking static target lib/librte_ring.a 00:01:51.544 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:51.544 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:51.544 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:51.544 [97/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:51.544 [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:51.544 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:51.544 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:51.544 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:51.544 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:51.544 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:51.544 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:51.544 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:51.544 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:51.544 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:51.544 [108/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:51.544 [109/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:51.544 [110/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:51.544 [111/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 
00:01:51.544 [112/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:51.544 [113/268] Linking static target lib/librte_rcu.a 00:01:51.544 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:51.544 [115/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:51.544 [116/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.544 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:51.544 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:51.544 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:51.544 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:51.544 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:51.544 [122/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:51.544 [123/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:51.544 [124/268] Linking static target lib/librte_net.a 00:01:51.544 [125/268] Linking static target lib/librte_mempool.a 00:01:51.544 [126/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:51.544 [127/268] Linking static target lib/librte_eal.a 00:01:51.544 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:51.544 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:51.544 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:51.544 [131/268] Linking static target lib/librte_cmdline.a 00:01:51.544 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.544 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:51.544 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:51.544 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:51.804 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.804 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:51.804 [138/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:51.804 [139/268] Linking static target lib/librte_mbuf.a 00:01:51.804 [140/268] Linking target lib/librte_log.so.24.1 00:01:51.804 [141/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:51.804 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:51.804 [143/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.804 [144/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:51.804 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.804 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:51.804 [147/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:51.804 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:51.804 [149/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:51.804 [150/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.805 [151/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:51.805 [152/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:51.805 [153/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:51.805 [154/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.805 [155/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:51.805 [156/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:51.805 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:51.805 [158/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:51.805 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:51.805 [160/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:51.805 [161/268] Linking static target lib/librte_timer.a 00:01:51.805 [162/268] Linking static target lib/librte_reorder.a 00:01:51.805 [163/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:51.805 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:51.805 [165/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:51.805 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:51.805 [167/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:51.805 [168/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:51.805 [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:51.805 [170/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:51.805 [171/268] Linking target lib/librte_telemetry.so.24.1 00:01:51.805 [172/268] Linking static target lib/librte_compressdev.a 00:01:51.805 [173/268] Linking static target lib/librte_security.a 00:01:51.805 [174/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:51.805 [175/268] Linking target lib/librte_kvargs.so.24.1 00:01:51.805 [176/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:51.805 [177/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:51.805 [178/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:51.805 [179/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:51.805 [180/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:51.805 [181/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:51.805 [182/268] Linking static target lib/librte_dmadev.a 00:01:52.064 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.064 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.064 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:52.064 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:52.064 [187/268] Linking static target lib/librte_power.a 00:01:52.064 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:52.064 [189/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:52.064 [190/268] Linking static target lib/librte_hash.a 00:01:52.064 [191/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:52.064 [192/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:52.064 [193/268] 
Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:52.064 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:52.064 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:52.065 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:52.065 [197/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:52.065 [198/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:52.065 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:52.065 [200/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.065 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:52.065 [202/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.065 [203/268] Linking static target drivers/librte_mempool_ring.a 00:01:52.065 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.065 [205/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.065 [206/268] Linking static target drivers/librte_bus_vdev.a 00:01:52.323 [207/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:52.323 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:52.323 [209/268] Linking static target lib/librte_cryptodev.a 00:01:52.323 [210/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.323 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.323 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.323 [213/268] Linking static target drivers/librte_bus_pci.a 00:01:52.323 [214/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.323 [215/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.323 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:52.323 [217/268] Linking static target lib/librte_ethdev.a 00:01:52.323 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.323 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.582 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.582 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.582 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.582 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:52.582 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.840 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.841 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.841 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.775 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:53.775 [229/268] Linking static target lib/librte_vhost.a 
00:01:54.033 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.408 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.676 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.244 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.244 [234/268] Linking target lib/librte_eal.so.24.1 00:02:01.503 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:01.503 [236/268] Linking target lib/librte_ring.so.24.1 00:02:01.503 [237/268] Linking target lib/librte_pci.so.24.1 00:02:01.503 [238/268] Linking target lib/librte_timer.so.24.1 00:02:01.503 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:01.503 [240/268] Linking target lib/librte_meter.so.24.1 00:02:01.504 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:01.504 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:01.504 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:01.763 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:01.763 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:01.763 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:01.763 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:01.763 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:01.763 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:01.763 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:01.763 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:01.763 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:01.763 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:02.022 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:02.022 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:02.022 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:02.022 [257/268] Linking target lib/librte_net.so.24.1 00:02:02.022 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:02.281 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:02.281 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:02.281 [261/268] Linking target lib/librte_hash.so.24.1 00:02:02.281 [262/268] Linking target lib/librte_security.so.24.1 00:02:02.281 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:02.281 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:02.281 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:02.281 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:02.539 [267/268] Linking target lib/librte_power.so.24.1 00:02:02.539 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:02.539 INFO: autodetecting backend as ninja 00:02:02.539 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:03.476 CC lib/log/log.o 00:02:03.476 CC lib/log/log_deprecated.o 00:02:03.476 CC lib/ut_mock/mock.o 00:02:03.476 CC lib/log/log_flags.o 00:02:03.476 CC 
lib/ut/ut.o 00:02:03.476 LIB libspdk_ut.a 00:02:03.476 LIB libspdk_log.a 00:02:03.734 LIB libspdk_ut_mock.a 00:02:03.734 SO libspdk_ut.so.2.0 00:02:03.734 SO libspdk_log.so.7.0 00:02:03.734 SO libspdk_ut_mock.so.6.0 00:02:03.734 SYMLINK libspdk_ut.so 00:02:03.734 SYMLINK libspdk_log.so 00:02:03.734 SYMLINK libspdk_ut_mock.so 00:02:03.991 CC lib/ioat/ioat.o 00:02:03.991 CC lib/dma/dma.o 00:02:03.991 CXX lib/trace_parser/trace.o 00:02:03.991 CC lib/util/base64.o 00:02:03.991 CC lib/util/bit_array.o 00:02:03.991 CC lib/util/crc16.o 00:02:03.991 CC lib/util/cpuset.o 00:02:03.991 CC lib/util/crc32.o 00:02:03.991 CC lib/util/crc32c.o 00:02:03.991 CC lib/util/crc32_ieee.o 00:02:03.991 CC lib/util/crc64.o 00:02:03.991 CC lib/util/file.o 00:02:03.991 CC lib/util/dif.o 00:02:03.991 CC lib/util/fd.o 00:02:03.991 CC lib/util/math.o 00:02:03.991 CC lib/util/hexlify.o 00:02:03.991 CC lib/util/iov.o 00:02:03.991 CC lib/util/pipe.o 00:02:03.991 CC lib/util/strerror_tls.o 00:02:03.991 CC lib/util/string.o 00:02:03.991 CC lib/util/uuid.o 00:02:03.991 CC lib/util/fd_group.o 00:02:03.991 CC lib/util/xor.o 00:02:03.991 CC lib/util/zipf.o 00:02:04.249 CC lib/vfio_user/host/vfio_user_pci.o 00:02:04.249 CC lib/vfio_user/host/vfio_user.o 00:02:04.249 LIB libspdk_dma.a 00:02:04.249 SO libspdk_dma.so.4.0 00:02:04.249 SYMLINK libspdk_dma.so 00:02:04.249 LIB libspdk_ioat.a 00:02:04.249 SO libspdk_ioat.so.7.0 00:02:04.249 SYMLINK libspdk_ioat.so 00:02:04.249 LIB libspdk_vfio_user.a 00:02:04.507 SO libspdk_vfio_user.so.5.0 00:02:04.507 LIB libspdk_util.a 00:02:04.507 SYMLINK libspdk_vfio_user.so 00:02:04.507 SO libspdk_util.so.9.1 00:02:04.507 SYMLINK libspdk_util.so 00:02:04.765 LIB libspdk_trace_parser.a 00:02:04.765 SO libspdk_trace_parser.so.5.0 00:02:04.765 SYMLINK libspdk_trace_parser.so 00:02:04.765 CC lib/vmd/vmd.o 00:02:04.765 CC lib/vmd/led.o 00:02:05.023 CC lib/idxd/idxd_user.o 00:02:05.023 CC lib/idxd/idxd_kernel.o 00:02:05.023 CC lib/idxd/idxd.o 00:02:05.023 CC lib/json/json_parse.o 00:02:05.023 CC lib/json/json_util.o 00:02:05.023 CC lib/json/json_write.o 00:02:05.023 CC lib/rdma_utils/rdma_utils.o 00:02:05.023 CC lib/env_dpdk/env.o 00:02:05.023 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:05.023 CC lib/rdma_provider/common.o 00:02:05.023 CC lib/env_dpdk/memory.o 00:02:05.023 CC lib/env_dpdk/pci.o 00:02:05.023 CC lib/env_dpdk/init.o 00:02:05.023 CC lib/env_dpdk/threads.o 00:02:05.023 CC lib/conf/conf.o 00:02:05.023 CC lib/env_dpdk/pci_ioat.o 00:02:05.023 CC lib/env_dpdk/pci_virtio.o 00:02:05.023 CC lib/env_dpdk/pci_vmd.o 00:02:05.023 CC lib/env_dpdk/pci_event.o 00:02:05.023 CC lib/env_dpdk/pci_idxd.o 00:02:05.023 CC lib/env_dpdk/sigbus_handler.o 00:02:05.023 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:05.023 CC lib/env_dpdk/pci_dpdk.o 00:02:05.023 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:05.023 LIB libspdk_rdma_provider.a 00:02:05.023 SO libspdk_rdma_provider.so.6.0 00:02:05.281 LIB libspdk_conf.a 00:02:05.281 LIB libspdk_rdma_utils.a 00:02:05.281 SO libspdk_conf.so.6.0 00:02:05.281 LIB libspdk_json.a 00:02:05.281 SYMLINK libspdk_rdma_provider.so 00:02:05.281 SO libspdk_rdma_utils.so.1.0 00:02:05.281 SO libspdk_json.so.6.0 00:02:05.281 SYMLINK libspdk_conf.so 00:02:05.281 SYMLINK libspdk_rdma_utils.so 00:02:05.281 SYMLINK libspdk_json.so 00:02:05.281 LIB libspdk_idxd.a 00:02:05.281 SO libspdk_idxd.so.12.0 00:02:05.542 LIB libspdk_vmd.a 00:02:05.542 SO libspdk_vmd.so.6.0 00:02:05.542 SYMLINK libspdk_idxd.so 00:02:05.542 SYMLINK libspdk_vmd.so 00:02:05.542 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:05.542 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:05.542 CC lib/jsonrpc/jsonrpc_client.o 00:02:05.542 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:05.873 LIB libspdk_jsonrpc.a 00:02:05.873 SO libspdk_jsonrpc.so.6.0 00:02:05.873 SYMLINK libspdk_jsonrpc.so 00:02:05.873 LIB libspdk_env_dpdk.a 00:02:06.132 SO libspdk_env_dpdk.so.14.1 00:02:06.132 SYMLINK libspdk_env_dpdk.so 00:02:06.132 CC lib/rpc/rpc.o 00:02:06.391 LIB libspdk_rpc.a 00:02:06.391 SO libspdk_rpc.so.6.0 00:02:06.391 SYMLINK libspdk_rpc.so 00:02:06.650 CC lib/notify/notify.o 00:02:06.650 CC lib/notify/notify_rpc.o 00:02:06.650 CC lib/trace/trace.o 00:02:06.650 CC lib/trace/trace_flags.o 00:02:06.650 CC lib/trace/trace_rpc.o 00:02:06.650 CC lib/keyring/keyring.o 00:02:06.650 CC lib/keyring/keyring_rpc.o 00:02:06.909 LIB libspdk_notify.a 00:02:06.909 SO libspdk_notify.so.6.0 00:02:06.909 LIB libspdk_trace.a 00:02:06.909 LIB libspdk_keyring.a 00:02:06.909 SYMLINK libspdk_notify.so 00:02:06.909 SO libspdk_trace.so.10.0 00:02:06.909 SO libspdk_keyring.so.1.0 00:02:06.909 SYMLINK libspdk_trace.so 00:02:07.168 SYMLINK libspdk_keyring.so 00:02:07.168 CC lib/sock/sock.o 00:02:07.168 CC lib/sock/sock_rpc.o 00:02:07.426 CC lib/thread/thread.o 00:02:07.426 CC lib/thread/iobuf.o 00:02:07.686 LIB libspdk_sock.a 00:02:07.686 SO libspdk_sock.so.10.0 00:02:07.686 SYMLINK libspdk_sock.so 00:02:07.945 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:07.945 CC lib/nvme/nvme_ctrlr.o 00:02:07.945 CC lib/nvme/nvme_fabric.o 00:02:07.945 CC lib/nvme/nvme_ns_cmd.o 00:02:07.945 CC lib/nvme/nvme_ns.o 00:02:07.945 CC lib/nvme/nvme_pcie_common.o 00:02:07.945 CC lib/nvme/nvme_pcie.o 00:02:07.945 CC lib/nvme/nvme_qpair.o 00:02:07.945 CC lib/nvme/nvme.o 00:02:07.945 CC lib/nvme/nvme_quirks.o 00:02:07.945 CC lib/nvme/nvme_transport.o 00:02:07.945 CC lib/nvme/nvme_discovery.o 00:02:07.945 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:07.945 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:07.945 CC lib/nvme/nvme_tcp.o 00:02:07.945 CC lib/nvme/nvme_opal.o 00:02:07.945 CC lib/nvme/nvme_poll_group.o 00:02:07.945 CC lib/nvme/nvme_io_msg.o 00:02:07.945 CC lib/nvme/nvme_zns.o 00:02:07.945 CC lib/nvme/nvme_stubs.o 00:02:07.945 CC lib/nvme/nvme_auth.o 00:02:07.945 CC lib/nvme/nvme_cuse.o 00:02:07.945 CC lib/nvme/nvme_vfio_user.o 00:02:07.945 CC lib/nvme/nvme_rdma.o 00:02:08.514 LIB libspdk_thread.a 00:02:08.514 SO libspdk_thread.so.10.1 00:02:08.514 SYMLINK libspdk_thread.so 00:02:08.773 CC lib/accel/accel.o 00:02:08.773 CC lib/accel/accel_rpc.o 00:02:08.773 CC lib/accel/accel_sw.o 00:02:08.773 CC lib/init/json_config.o 00:02:08.773 CC lib/init/subsystem_rpc.o 00:02:08.773 CC lib/init/subsystem.o 00:02:08.773 CC lib/init/rpc.o 00:02:08.773 CC lib/vfu_tgt/tgt_rpc.o 00:02:08.773 CC lib/vfu_tgt/tgt_endpoint.o 00:02:08.773 CC lib/virtio/virtio.o 00:02:08.773 CC lib/virtio/virtio_vhost_user.o 00:02:08.773 CC lib/virtio/virtio_vfio_user.o 00:02:08.773 CC lib/virtio/virtio_pci.o 00:02:08.773 CC lib/blob/blobstore.o 00:02:08.773 CC lib/blob/request.o 00:02:08.773 CC lib/blob/zeroes.o 00:02:08.773 CC lib/blob/blob_bs_dev.o 00:02:09.033 LIB libspdk_init.a 00:02:09.033 SO libspdk_init.so.5.0 00:02:09.033 LIB libspdk_virtio.a 00:02:09.033 LIB libspdk_vfu_tgt.a 00:02:09.033 SYMLINK libspdk_init.so 00:02:09.033 SO libspdk_vfu_tgt.so.3.0 00:02:09.033 SO libspdk_virtio.so.7.0 00:02:09.033 SYMLINK libspdk_vfu_tgt.so 00:02:09.033 SYMLINK libspdk_virtio.so 00:02:09.292 CC lib/event/app.o 00:02:09.292 CC lib/event/reactor.o 00:02:09.292 CC lib/event/log_rpc.o 00:02:09.292 CC lib/event/app_rpc.o 00:02:09.292 CC lib/event/scheduler_static.o 
00:02:09.551 LIB libspdk_accel.a 00:02:09.551 SO libspdk_accel.so.15.1 00:02:09.551 LIB libspdk_nvme.a 00:02:09.551 SYMLINK libspdk_accel.so 00:02:09.551 LIB libspdk_event.a 00:02:09.551 SO libspdk_nvme.so.13.1 00:02:09.551 SO libspdk_event.so.14.0 00:02:09.811 SYMLINK libspdk_event.so 00:02:09.811 CC lib/bdev/bdev.o 00:02:09.811 CC lib/bdev/bdev_rpc.o 00:02:09.811 CC lib/bdev/bdev_zone.o 00:02:09.811 CC lib/bdev/part.o 00:02:09.811 CC lib/bdev/scsi_nvme.o 00:02:09.811 SYMLINK libspdk_nvme.so 00:02:10.747 LIB libspdk_blob.a 00:02:10.747 SO libspdk_blob.so.11.0 00:02:11.005 SYMLINK libspdk_blob.so 00:02:11.264 CC lib/lvol/lvol.o 00:02:11.264 CC lib/blobfs/blobfs.o 00:02:11.264 CC lib/blobfs/tree.o 00:02:11.523 LIB libspdk_bdev.a 00:02:11.523 SO libspdk_bdev.so.15.1 00:02:11.782 SYMLINK libspdk_bdev.so 00:02:11.782 LIB libspdk_blobfs.a 00:02:11.782 LIB libspdk_lvol.a 00:02:11.782 SO libspdk_blobfs.so.10.0 00:02:11.782 SO libspdk_lvol.so.10.0 00:02:12.040 SYMLINK libspdk_blobfs.so 00:02:12.040 SYMLINK libspdk_lvol.so 00:02:12.040 CC lib/scsi/lun.o 00:02:12.040 CC lib/scsi/dev.o 00:02:12.040 CC lib/scsi/port.o 00:02:12.040 CC lib/scsi/scsi.o 00:02:12.040 CC lib/scsi/scsi_rpc.o 00:02:12.040 CC lib/scsi/scsi_bdev.o 00:02:12.040 CC lib/scsi/scsi_pr.o 00:02:12.040 CC lib/scsi/task.o 00:02:12.040 CC lib/nbd/nbd.o 00:02:12.040 CC lib/nbd/nbd_rpc.o 00:02:12.040 CC lib/nvmf/ctrlr.o 00:02:12.040 CC lib/nvmf/ctrlr_discovery.o 00:02:12.040 CC lib/nvmf/ctrlr_bdev.o 00:02:12.040 CC lib/nvmf/subsystem.o 00:02:12.040 CC lib/nvmf/nvmf.o 00:02:12.040 CC lib/nvmf/nvmf_rpc.o 00:02:12.040 CC lib/nvmf/tcp.o 00:02:12.040 CC lib/nvmf/transport.o 00:02:12.040 CC lib/ublk/ublk.o 00:02:12.040 CC lib/nvmf/mdns_server.o 00:02:12.040 CC lib/ublk/ublk_rpc.o 00:02:12.040 CC lib/nvmf/stubs.o 00:02:12.040 CC lib/nvmf/rdma.o 00:02:12.040 CC lib/nvmf/vfio_user.o 00:02:12.040 CC lib/nvmf/auth.o 00:02:12.040 CC lib/ftl/ftl_core.o 00:02:12.040 CC lib/ftl/ftl_init.o 00:02:12.040 CC lib/ftl/ftl_layout.o 00:02:12.040 CC lib/ftl/ftl_debug.o 00:02:12.040 CC lib/ftl/ftl_io.o 00:02:12.040 CC lib/ftl/ftl_sb.o 00:02:12.040 CC lib/ftl/ftl_l2p.o 00:02:12.040 CC lib/ftl/ftl_l2p_flat.o 00:02:12.040 CC lib/ftl/ftl_nv_cache.o 00:02:12.040 CC lib/ftl/ftl_band.o 00:02:12.040 CC lib/ftl/ftl_band_ops.o 00:02:12.040 CC lib/ftl/ftl_writer.o 00:02:12.040 CC lib/ftl/ftl_rq.o 00:02:12.040 CC lib/ftl/ftl_reloc.o 00:02:12.040 CC lib/ftl/ftl_l2p_cache.o 00:02:12.040 CC lib/ftl/mngt/ftl_mngt.o 00:02:12.040 CC lib/ftl/ftl_p2l.o 00:02:12.040 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:12.040 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:12.040 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:12.040 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:12.040 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:12.040 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:12.040 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:12.040 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:12.040 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:12.040 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:12.040 CC lib/ftl/utils/ftl_conf.o 00:02:12.040 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:12.040 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:12.040 CC lib/ftl/utils/ftl_md.o 00:02:12.040 CC lib/ftl/utils/ftl_mempool.o 00:02:12.040 CC lib/ftl/utils/ftl_bitmap.o 00:02:12.040 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:12.040 CC lib/ftl/utils/ftl_property.o 00:02:12.040 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:12.040 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:12.040 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:12.040 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:12.040 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:12.040 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:12.040 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:12.040 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:12.041 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:12.041 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:12.041 CC lib/ftl/base/ftl_base_dev.o 00:02:12.041 CC lib/ftl/ftl_trace.o 00:02:12.041 CC lib/ftl/base/ftl_base_bdev.o 00:02:12.607 LIB libspdk_nbd.a 00:02:12.607 SO libspdk_nbd.so.7.0 00:02:12.607 SYMLINK libspdk_nbd.so 00:02:12.607 LIB libspdk_scsi.a 00:02:12.607 SO libspdk_scsi.so.9.0 00:02:12.865 SYMLINK libspdk_scsi.so 00:02:12.865 LIB libspdk_ublk.a 00:02:12.865 SO libspdk_ublk.so.3.0 00:02:12.865 SYMLINK libspdk_ublk.so 00:02:12.865 LIB libspdk_ftl.a 00:02:13.123 CC lib/iscsi/conn.o 00:02:13.123 CC lib/iscsi/init_grp.o 00:02:13.123 CC lib/iscsi/md5.o 00:02:13.123 CC lib/iscsi/iscsi.o 00:02:13.123 CC lib/iscsi/tgt_node.o 00:02:13.123 CC lib/iscsi/param.o 00:02:13.123 CC lib/iscsi/portal_grp.o 00:02:13.123 CC lib/iscsi/iscsi_rpc.o 00:02:13.123 CC lib/iscsi/iscsi_subsystem.o 00:02:13.123 CC lib/iscsi/task.o 00:02:13.123 SO libspdk_ftl.so.9.0 00:02:13.123 CC lib/vhost/vhost_rpc.o 00:02:13.123 CC lib/vhost/vhost.o 00:02:13.123 CC lib/vhost/vhost_scsi.o 00:02:13.123 CC lib/vhost/vhost_blk.o 00:02:13.123 CC lib/vhost/rte_vhost_user.o 00:02:13.381 SYMLINK libspdk_ftl.so 00:02:13.640 LIB libspdk_nvmf.a 00:02:13.898 SO libspdk_nvmf.so.19.0 00:02:13.898 LIB libspdk_vhost.a 00:02:13.898 SO libspdk_vhost.so.8.0 00:02:13.898 SYMLINK libspdk_nvmf.so 00:02:13.898 SYMLINK libspdk_vhost.so 00:02:14.156 LIB libspdk_iscsi.a 00:02:14.156 SO libspdk_iscsi.so.8.0 00:02:14.156 SYMLINK libspdk_iscsi.so 00:02:14.722 CC module/vfu_device/vfu_virtio.o 00:02:14.722 CC module/vfu_device/vfu_virtio_blk.o 00:02:14.722 CC module/vfu_device/vfu_virtio_rpc.o 00:02:14.722 CC module/vfu_device/vfu_virtio_scsi.o 00:02:14.722 CC module/env_dpdk/env_dpdk_rpc.o 00:02:14.722 CC module/keyring/file/keyring.o 00:02:14.722 CC module/keyring/file/keyring_rpc.o 00:02:14.722 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:14.722 CC module/keyring/linux/keyring.o 00:02:14.722 CC module/keyring/linux/keyring_rpc.o 00:02:14.722 CC module/accel/dsa/accel_dsa_rpc.o 00:02:14.980 CC module/accel/dsa/accel_dsa.o 00:02:14.980 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:14.980 CC module/accel/error/accel_error.o 00:02:14.980 CC module/accel/error/accel_error_rpc.o 00:02:14.980 CC module/sock/posix/posix.o 00:02:14.980 CC module/scheduler/gscheduler/gscheduler.o 00:02:14.980 LIB libspdk_env_dpdk_rpc.a 00:02:14.980 CC module/blob/bdev/blob_bdev.o 00:02:14.980 CC module/accel/iaa/accel_iaa.o 00:02:14.980 CC module/accel/iaa/accel_iaa_rpc.o 00:02:14.980 CC module/accel/ioat/accel_ioat.o 00:02:14.980 CC module/accel/ioat/accel_ioat_rpc.o 00:02:14.980 SO libspdk_env_dpdk_rpc.so.6.0 00:02:14.980 SYMLINK libspdk_env_dpdk_rpc.so 00:02:14.981 LIB libspdk_keyring_linux.a 00:02:14.981 LIB libspdk_keyring_file.a 00:02:14.981 LIB libspdk_scheduler_gscheduler.a 00:02:14.981 SO libspdk_keyring_file.so.1.0 00:02:14.981 LIB libspdk_scheduler_dynamic.a 00:02:14.981 LIB libspdk_accel_error.a 00:02:14.981 LIB libspdk_scheduler_dpdk_governor.a 00:02:14.981 SO libspdk_keyring_linux.so.1.0 00:02:14.981 LIB libspdk_accel_ioat.a 00:02:14.981 SO libspdk_scheduler_dynamic.so.4.0 00:02:14.981 SO libspdk_scheduler_gscheduler.so.4.0 00:02:14.981 SO libspdk_accel_error.so.2.0 00:02:14.981 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:14.981 LIB libspdk_accel_iaa.a 00:02:14.981 SO 
libspdk_accel_ioat.so.6.0 00:02:14.981 SYMLINK libspdk_keyring_linux.so 00:02:14.981 SYMLINK libspdk_keyring_file.so 00:02:14.981 LIB libspdk_accel_dsa.a 00:02:14.981 LIB libspdk_blob_bdev.a 00:02:15.243 SO libspdk_accel_iaa.so.3.0 00:02:15.243 SYMLINK libspdk_scheduler_gscheduler.so 00:02:15.243 SYMLINK libspdk_scheduler_dynamic.so 00:02:15.243 SYMLINK libspdk_accel_error.so 00:02:15.243 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:15.243 SO libspdk_accel_dsa.so.5.0 00:02:15.243 SO libspdk_blob_bdev.so.11.0 00:02:15.243 SYMLINK libspdk_accel_ioat.so 00:02:15.243 SYMLINK libspdk_accel_iaa.so 00:02:15.243 SYMLINK libspdk_accel_dsa.so 00:02:15.243 LIB libspdk_vfu_device.a 00:02:15.243 SYMLINK libspdk_blob_bdev.so 00:02:15.243 SO libspdk_vfu_device.so.3.0 00:02:15.243 SYMLINK libspdk_vfu_device.so 00:02:15.501 LIB libspdk_sock_posix.a 00:02:15.501 SO libspdk_sock_posix.so.6.0 00:02:15.501 SYMLINK libspdk_sock_posix.so 00:02:15.501 CC module/bdev/delay/vbdev_delay.o 00:02:15.501 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:15.501 CC module/bdev/ftl/bdev_ftl.o 00:02:15.501 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:15.501 CC module/bdev/malloc/bdev_malloc.o 00:02:15.501 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:15.501 CC module/bdev/null/bdev_null.o 00:02:15.501 CC module/bdev/error/vbdev_error.o 00:02:15.501 CC module/bdev/gpt/gpt.o 00:02:15.501 CC module/bdev/null/bdev_null_rpc.o 00:02:15.759 CC module/bdev/gpt/vbdev_gpt.o 00:02:15.759 CC module/bdev/error/vbdev_error_rpc.o 00:02:15.759 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:15.759 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:15.759 CC module/bdev/split/vbdev_split.o 00:02:15.759 CC module/bdev/lvol/vbdev_lvol.o 00:02:15.759 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:15.759 CC module/bdev/split/vbdev_split_rpc.o 00:02:15.759 CC module/bdev/aio/bdev_aio_rpc.o 00:02:15.759 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:15.759 CC module/bdev/aio/bdev_aio.o 00:02:15.759 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:15.759 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:15.759 CC module/blobfs/bdev/blobfs_bdev.o 00:02:15.759 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:15.759 CC module/bdev/iscsi/bdev_iscsi.o 00:02:15.759 CC module/bdev/nvme/nvme_rpc.o 00:02:15.759 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:15.759 CC module/bdev/passthru/vbdev_passthru.o 00:02:15.759 CC module/bdev/nvme/bdev_nvme.o 00:02:15.759 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:15.759 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:15.759 CC module/bdev/nvme/bdev_mdns_client.o 00:02:15.759 CC module/bdev/nvme/vbdev_opal.o 00:02:15.759 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:15.759 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:15.759 CC module/bdev/raid/bdev_raid.o 00:02:15.759 CC module/bdev/raid/bdev_raid_rpc.o 00:02:15.759 CC module/bdev/raid/bdev_raid_sb.o 00:02:15.759 CC module/bdev/raid/raid0.o 00:02:15.759 CC module/bdev/raid/raid1.o 00:02:15.759 CC module/bdev/raid/concat.o 00:02:16.018 LIB libspdk_blobfs_bdev.a 00:02:16.018 LIB libspdk_bdev_error.a 00:02:16.018 LIB libspdk_bdev_null.a 00:02:16.018 SO libspdk_blobfs_bdev.so.6.0 00:02:16.018 LIB libspdk_bdev_ftl.a 00:02:16.018 LIB libspdk_bdev_split.a 00:02:16.018 SO libspdk_bdev_null.so.6.0 00:02:16.018 SO libspdk_bdev_error.so.6.0 00:02:16.018 LIB libspdk_bdev_gpt.a 00:02:16.018 SO libspdk_bdev_ftl.so.6.0 00:02:16.018 SO libspdk_bdev_split.so.6.0 00:02:16.018 SO libspdk_bdev_gpt.so.6.0 00:02:16.018 LIB libspdk_bdev_delay.a 00:02:16.018 SYMLINK libspdk_blobfs_bdev.so 00:02:16.018 
LIB libspdk_bdev_passthru.a 00:02:16.018 SYMLINK libspdk_bdev_error.so 00:02:16.018 LIB libspdk_bdev_aio.a 00:02:16.018 SO libspdk_bdev_delay.so.6.0 00:02:16.018 SYMLINK libspdk_bdev_null.so 00:02:16.018 LIB libspdk_bdev_iscsi.a 00:02:16.018 SYMLINK libspdk_bdev_split.so 00:02:16.018 LIB libspdk_bdev_zone_block.a 00:02:16.018 SO libspdk_bdev_passthru.so.6.0 00:02:16.018 SYMLINK libspdk_bdev_ftl.so 00:02:16.018 SO libspdk_bdev_aio.so.6.0 00:02:16.018 LIB libspdk_bdev_malloc.a 00:02:16.018 SYMLINK libspdk_bdev_gpt.so 00:02:16.018 SO libspdk_bdev_iscsi.so.6.0 00:02:16.018 SO libspdk_bdev_malloc.so.6.0 00:02:16.018 SO libspdk_bdev_zone_block.so.6.0 00:02:16.018 SYMLINK libspdk_bdev_delay.so 00:02:16.018 SYMLINK libspdk_bdev_passthru.so 00:02:16.018 SYMLINK libspdk_bdev_aio.so 00:02:16.018 SYMLINK libspdk_bdev_iscsi.so 00:02:16.018 SYMLINK libspdk_bdev_malloc.so 00:02:16.018 LIB libspdk_bdev_virtio.a 00:02:16.018 SYMLINK libspdk_bdev_zone_block.so 00:02:16.018 LIB libspdk_bdev_lvol.a 00:02:16.276 SO libspdk_bdev_virtio.so.6.0 00:02:16.276 SO libspdk_bdev_lvol.so.6.0 00:02:16.276 SYMLINK libspdk_bdev_virtio.so 00:02:16.276 SYMLINK libspdk_bdev_lvol.so 00:02:16.535 LIB libspdk_bdev_raid.a 00:02:16.535 SO libspdk_bdev_raid.so.6.0 00:02:16.535 SYMLINK libspdk_bdev_raid.so 00:02:17.473 LIB libspdk_bdev_nvme.a 00:02:17.473 SO libspdk_bdev_nvme.so.7.0 00:02:17.473 SYMLINK libspdk_bdev_nvme.so 00:02:18.040 CC module/event/subsystems/sock/sock.o 00:02:18.040 CC module/event/subsystems/iobuf/iobuf.o 00:02:18.040 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:18.040 CC module/event/subsystems/scheduler/scheduler.o 00:02:18.040 CC module/event/subsystems/vmd/vmd.o 00:02:18.040 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:18.040 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:18.040 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:18.040 CC module/event/subsystems/keyring/keyring.o 00:02:18.040 LIB libspdk_event_sock.a 00:02:18.040 LIB libspdk_event_scheduler.a 00:02:18.040 LIB libspdk_event_vmd.a 00:02:18.040 LIB libspdk_event_iobuf.a 00:02:18.040 SO libspdk_event_scheduler.so.4.0 00:02:18.040 LIB libspdk_event_vfu_tgt.a 00:02:18.040 SO libspdk_event_sock.so.5.0 00:02:18.040 LIB libspdk_event_keyring.a 00:02:18.297 LIB libspdk_event_vhost_blk.a 00:02:18.297 SO libspdk_event_iobuf.so.3.0 00:02:18.297 SO libspdk_event_vmd.so.6.0 00:02:18.297 SO libspdk_event_keyring.so.1.0 00:02:18.297 SO libspdk_event_vfu_tgt.so.3.0 00:02:18.297 SO libspdk_event_vhost_blk.so.3.0 00:02:18.297 SYMLINK libspdk_event_scheduler.so 00:02:18.297 SYMLINK libspdk_event_sock.so 00:02:18.297 SYMLINK libspdk_event_keyring.so 00:02:18.297 SYMLINK libspdk_event_iobuf.so 00:02:18.297 SYMLINK libspdk_event_vmd.so 00:02:18.297 SYMLINK libspdk_event_vfu_tgt.so 00:02:18.297 SYMLINK libspdk_event_vhost_blk.so 00:02:18.555 CC module/event/subsystems/accel/accel.o 00:02:18.555 LIB libspdk_event_accel.a 00:02:18.814 SO libspdk_event_accel.so.6.0 00:02:18.814 SYMLINK libspdk_event_accel.so 00:02:19.072 CC module/event/subsystems/bdev/bdev.o 00:02:19.072 LIB libspdk_event_bdev.a 00:02:19.331 SO libspdk_event_bdev.so.6.0 00:02:19.331 SYMLINK libspdk_event_bdev.so 00:02:19.590 CC module/event/subsystems/nbd/nbd.o 00:02:19.590 CC module/event/subsystems/ublk/ublk.o 00:02:19.590 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:19.590 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:19.590 CC module/event/subsystems/scsi/scsi.o 00:02:19.590 LIB libspdk_event_nbd.a 00:02:19.590 SO libspdk_event_nbd.so.6.0 00:02:19.882 LIB 
libspdk_event_ublk.a 00:02:19.882 LIB libspdk_event_scsi.a 00:02:19.882 SYMLINK libspdk_event_nbd.so 00:02:19.882 SO libspdk_event_ublk.so.3.0 00:02:19.882 SO libspdk_event_scsi.so.6.0 00:02:19.882 LIB libspdk_event_nvmf.a 00:02:19.882 SYMLINK libspdk_event_ublk.so 00:02:19.882 SO libspdk_event_nvmf.so.6.0 00:02:19.882 SYMLINK libspdk_event_scsi.so 00:02:19.882 SYMLINK libspdk_event_nvmf.so 00:02:20.147 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:20.147 CC module/event/subsystems/iscsi/iscsi.o 00:02:20.147 LIB libspdk_event_vhost_scsi.a 00:02:20.407 SO libspdk_event_vhost_scsi.so.3.0 00:02:20.407 LIB libspdk_event_iscsi.a 00:02:20.407 SO libspdk_event_iscsi.so.6.0 00:02:20.407 SYMLINK libspdk_event_vhost_scsi.so 00:02:20.407 SYMLINK libspdk_event_iscsi.so 00:02:20.667 SO libspdk.so.6.0 00:02:20.667 SYMLINK libspdk.so 00:02:20.939 CC test/rpc_client/rpc_client_test.o 00:02:20.939 CC app/spdk_top/spdk_top.o 00:02:20.939 TEST_HEADER include/spdk/accel.h 00:02:20.939 TEST_HEADER include/spdk/assert.h 00:02:20.939 TEST_HEADER include/spdk/accel_module.h 00:02:20.939 TEST_HEADER include/spdk/bdev.h 00:02:20.939 TEST_HEADER include/spdk/barrier.h 00:02:20.939 TEST_HEADER include/spdk/base64.h 00:02:20.939 TEST_HEADER include/spdk/bdev_module.h 00:02:20.939 TEST_HEADER include/spdk/bdev_zone.h 00:02:20.939 CC app/spdk_lspci/spdk_lspci.o 00:02:20.939 TEST_HEADER include/spdk/blob_bdev.h 00:02:20.939 TEST_HEADER include/spdk/bit_array.h 00:02:20.939 TEST_HEADER include/spdk/bit_pool.h 00:02:20.939 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:20.939 TEST_HEADER include/spdk/blobfs.h 00:02:20.939 TEST_HEADER include/spdk/blob.h 00:02:20.939 TEST_HEADER include/spdk/conf.h 00:02:20.939 TEST_HEADER include/spdk/cpuset.h 00:02:20.939 TEST_HEADER include/spdk/config.h 00:02:20.939 TEST_HEADER include/spdk/crc32.h 00:02:20.939 TEST_HEADER include/spdk/crc16.h 00:02:20.939 TEST_HEADER include/spdk/dif.h 00:02:20.939 TEST_HEADER include/spdk/crc64.h 00:02:20.939 TEST_HEADER include/spdk/endian.h 00:02:20.939 TEST_HEADER include/spdk/env_dpdk.h 00:02:20.939 CXX app/trace/trace.o 00:02:20.939 TEST_HEADER include/spdk/dma.h 00:02:20.939 CC app/spdk_nvme_perf/perf.o 00:02:20.939 TEST_HEADER include/spdk/event.h 00:02:20.939 CC app/trace_record/trace_record.o 00:02:20.939 TEST_HEADER include/spdk/env.h 00:02:20.939 TEST_HEADER include/spdk/fd_group.h 00:02:20.939 TEST_HEADER include/spdk/fd.h 00:02:20.939 TEST_HEADER include/spdk/ftl.h 00:02:20.939 TEST_HEADER include/spdk/gpt_spec.h 00:02:20.939 TEST_HEADER include/spdk/file.h 00:02:20.939 CC app/spdk_nvme_identify/identify.o 00:02:20.939 TEST_HEADER include/spdk/hexlify.h 00:02:20.939 TEST_HEADER include/spdk/idxd.h 00:02:20.939 TEST_HEADER include/spdk/histogram_data.h 00:02:20.939 TEST_HEADER include/spdk/init.h 00:02:20.939 TEST_HEADER include/spdk/ioat.h 00:02:20.939 TEST_HEADER include/spdk/idxd_spec.h 00:02:20.939 TEST_HEADER include/spdk/ioat_spec.h 00:02:20.939 TEST_HEADER include/spdk/iscsi_spec.h 00:02:20.939 TEST_HEADER include/spdk/json.h 00:02:20.939 TEST_HEADER include/spdk/jsonrpc.h 00:02:20.939 CC app/spdk_nvme_discover/discovery_aer.o 00:02:20.939 TEST_HEADER include/spdk/keyring.h 00:02:20.939 TEST_HEADER include/spdk/keyring_module.h 00:02:20.939 TEST_HEADER include/spdk/likely.h 00:02:20.939 TEST_HEADER include/spdk/lvol.h 00:02:20.939 TEST_HEADER include/spdk/log.h 00:02:20.939 TEST_HEADER include/spdk/memory.h 00:02:20.939 TEST_HEADER include/spdk/mmio.h 00:02:20.939 TEST_HEADER include/spdk/nbd.h 00:02:20.939 TEST_HEADER 
include/spdk/notify.h 00:02:20.939 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:20.939 TEST_HEADER include/spdk/nvme_intel.h 00:02:20.939 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:20.939 TEST_HEADER include/spdk/nvme.h 00:02:20.939 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:20.939 TEST_HEADER include/spdk/nvme_spec.h 00:02:20.939 TEST_HEADER include/spdk/nvme_zns.h 00:02:20.939 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:20.939 TEST_HEADER include/spdk/nvmf.h 00:02:20.939 CC app/iscsi_tgt/iscsi_tgt.o 00:02:20.939 CC app/nvmf_tgt/nvmf_main.o 00:02:20.939 TEST_HEADER include/spdk/nvmf_spec.h 00:02:20.939 TEST_HEADER include/spdk/nvmf_transport.h 00:02:20.939 TEST_HEADER include/spdk/opal.h 00:02:20.939 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:20.939 TEST_HEADER include/spdk/opal_spec.h 00:02:20.939 TEST_HEADER include/spdk/queue.h 00:02:20.939 TEST_HEADER include/spdk/pipe.h 00:02:20.939 TEST_HEADER include/spdk/pci_ids.h 00:02:20.939 CC app/spdk_dd/spdk_dd.o 00:02:20.939 TEST_HEADER include/spdk/rpc.h 00:02:20.939 TEST_HEADER include/spdk/reduce.h 00:02:20.939 TEST_HEADER include/spdk/scsi.h 00:02:20.939 TEST_HEADER include/spdk/scheduler.h 00:02:20.939 TEST_HEADER include/spdk/sock.h 00:02:20.939 TEST_HEADER include/spdk/scsi_spec.h 00:02:20.939 TEST_HEADER include/spdk/stdinc.h 00:02:20.939 TEST_HEADER include/spdk/thread.h 00:02:20.939 TEST_HEADER include/spdk/string.h 00:02:20.939 TEST_HEADER include/spdk/trace.h 00:02:20.939 TEST_HEADER include/spdk/tree.h 00:02:20.939 TEST_HEADER include/spdk/trace_parser.h 00:02:20.939 TEST_HEADER include/spdk/ublk.h 00:02:20.939 TEST_HEADER include/spdk/util.h 00:02:20.939 TEST_HEADER include/spdk/uuid.h 00:02:20.939 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:20.939 TEST_HEADER include/spdk/version.h 00:02:20.939 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:20.939 TEST_HEADER include/spdk/vhost.h 00:02:20.939 TEST_HEADER include/spdk/vmd.h 00:02:20.939 TEST_HEADER include/spdk/xor.h 00:02:20.939 TEST_HEADER include/spdk/zipf.h 00:02:20.939 CXX test/cpp_headers/accel.o 00:02:20.939 CXX test/cpp_headers/accel_module.o 00:02:20.939 CXX test/cpp_headers/assert.o 00:02:20.939 CXX test/cpp_headers/barrier.o 00:02:20.939 CXX test/cpp_headers/bdev.o 00:02:20.939 CXX test/cpp_headers/base64.o 00:02:20.939 CXX test/cpp_headers/bdev_module.o 00:02:20.939 CXX test/cpp_headers/bdev_zone.o 00:02:20.939 CXX test/cpp_headers/bit_pool.o 00:02:20.939 CXX test/cpp_headers/bit_array.o 00:02:20.939 CXX test/cpp_headers/blob_bdev.o 00:02:20.939 CXX test/cpp_headers/blobfs.o 00:02:20.939 CXX test/cpp_headers/blobfs_bdev.o 00:02:20.939 CXX test/cpp_headers/blob.o 00:02:20.939 CXX test/cpp_headers/conf.o 00:02:20.939 CXX test/cpp_headers/config.o 00:02:20.939 CXX test/cpp_headers/crc16.o 00:02:20.939 CXX test/cpp_headers/cpuset.o 00:02:20.939 CC app/spdk_tgt/spdk_tgt.o 00:02:20.939 CXX test/cpp_headers/crc32.o 00:02:20.939 CXX test/cpp_headers/crc64.o 00:02:20.939 CXX test/cpp_headers/dma.o 00:02:20.939 CXX test/cpp_headers/dif.o 00:02:20.939 CXX test/cpp_headers/env_dpdk.o 00:02:20.939 CXX test/cpp_headers/endian.o 00:02:20.939 CXX test/cpp_headers/event.o 00:02:20.939 CXX test/cpp_headers/env.o 00:02:20.939 CXX test/cpp_headers/fd.o 00:02:20.939 CXX test/cpp_headers/fd_group.o 00:02:20.939 CXX test/cpp_headers/ftl.o 00:02:20.939 CXX test/cpp_headers/file.o 00:02:20.939 CXX test/cpp_headers/gpt_spec.o 00:02:20.939 CXX test/cpp_headers/histogram_data.o 00:02:20.939 CXX test/cpp_headers/hexlify.o 00:02:20.939 CXX test/cpp_headers/idxd.o 00:02:20.939 CXX 
test/cpp_headers/idxd_spec.o 00:02:20.939 CXX test/cpp_headers/init.o 00:02:20.939 CXX test/cpp_headers/ioat.o 00:02:20.939 CXX test/cpp_headers/ioat_spec.o 00:02:20.939 CXX test/cpp_headers/iscsi_spec.o 00:02:20.939 CXX test/cpp_headers/jsonrpc.o 00:02:20.939 CXX test/cpp_headers/keyring.o 00:02:20.940 CXX test/cpp_headers/json.o 00:02:20.940 CXX test/cpp_headers/likely.o 00:02:20.940 CXX test/cpp_headers/keyring_module.o 00:02:20.940 CXX test/cpp_headers/log.o 00:02:20.940 CXX test/cpp_headers/memory.o 00:02:20.940 CXX test/cpp_headers/lvol.o 00:02:20.940 CXX test/cpp_headers/nbd.o 00:02:20.940 CXX test/cpp_headers/mmio.o 00:02:20.940 CXX test/cpp_headers/notify.o 00:02:20.940 CXX test/cpp_headers/nvme_intel.o 00:02:20.940 CXX test/cpp_headers/nvme.o 00:02:20.940 CXX test/cpp_headers/nvme_ocssd.o 00:02:20.940 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:20.940 CXX test/cpp_headers/nvme_spec.o 00:02:20.940 CXX test/cpp_headers/nvme_zns.o 00:02:20.940 CXX test/cpp_headers/nvmf_cmd.o 00:02:20.940 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:20.940 CXX test/cpp_headers/nvmf.o 00:02:20.940 CXX test/cpp_headers/nvmf_spec.o 00:02:20.940 CXX test/cpp_headers/nvmf_transport.o 00:02:20.940 CXX test/cpp_headers/opal.o 00:02:20.940 CXX test/cpp_headers/opal_spec.o 00:02:20.940 CXX test/cpp_headers/pci_ids.o 00:02:20.940 CXX test/cpp_headers/pipe.o 00:02:20.940 CXX test/cpp_headers/queue.o 00:02:20.940 CXX test/cpp_headers/reduce.o 00:02:20.940 CC test/thread/poller_perf/poller_perf.o 00:02:21.216 CC test/env/pci/pci_ut.o 00:02:21.216 CC test/env/vtophys/vtophys.o 00:02:21.216 CC examples/ioat/perf/perf.o 00:02:21.216 CC examples/ioat/verify/verify.o 00:02:21.216 CC test/env/memory/memory_ut.o 00:02:21.216 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:21.216 CC test/dma/test_dma/test_dma.o 00:02:21.216 CC test/app/stub/stub.o 00:02:21.216 CC test/app/histogram_perf/histogram_perf.o 00:02:21.216 CC app/fio/nvme/fio_plugin.o 00:02:21.216 CC examples/util/zipf/zipf.o 00:02:21.217 CC test/app/jsoncat/jsoncat.o 00:02:21.217 LINK spdk_lspci 00:02:21.217 CC test/app/bdev_svc/bdev_svc.o 00:02:21.217 CC app/fio/bdev/fio_plugin.o 00:02:21.217 LINK rpc_client_test 00:02:21.481 LINK spdk_nvme_discover 00:02:21.481 LINK iscsi_tgt 00:02:21.481 CC test/env/mem_callbacks/mem_callbacks.o 00:02:21.481 LINK spdk_trace_record 00:02:21.481 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:21.481 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:21.481 LINK poller_perf 00:02:21.481 LINK vtophys 00:02:21.481 CXX test/cpp_headers/rpc.o 00:02:21.481 LINK interrupt_tgt 00:02:21.481 CXX test/cpp_headers/scheduler.o 00:02:21.481 CXX test/cpp_headers/scsi.o 00:02:21.481 CXX test/cpp_headers/scsi_spec.o 00:02:21.481 CXX test/cpp_headers/sock.o 00:02:21.481 CXX test/cpp_headers/stdinc.o 00:02:21.481 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:21.481 CXX test/cpp_headers/string.o 00:02:21.481 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:21.740 CXX test/cpp_headers/trace.o 00:02:21.740 CXX test/cpp_headers/trace_parser.o 00:02:21.740 CXX test/cpp_headers/thread.o 00:02:21.740 CXX test/cpp_headers/tree.o 00:02:21.740 LINK jsoncat 00:02:21.740 CXX test/cpp_headers/ublk.o 00:02:21.740 CXX test/cpp_headers/util.o 00:02:21.740 LINK nvmf_tgt 00:02:21.740 CXX test/cpp_headers/uuid.o 00:02:21.740 CXX test/cpp_headers/version.o 00:02:21.740 CXX test/cpp_headers/vfio_user_pci.o 00:02:21.740 CXX test/cpp_headers/vfio_user_spec.o 00:02:21.740 CXX test/cpp_headers/vmd.o 00:02:21.740 CXX test/cpp_headers/xor.o 00:02:21.740 LINK stub 
00:02:21.740 CXX test/cpp_headers/zipf.o 00:02:21.740 CXX test/cpp_headers/vhost.o 00:02:21.740 LINK ioat_perf 00:02:21.740 LINK bdev_svc 00:02:21.740 LINK histogram_perf 00:02:21.740 LINK env_dpdk_post_init 00:02:21.740 LINK spdk_tgt 00:02:21.740 LINK zipf 00:02:21.740 LINK spdk_dd 00:02:21.740 LINK verify 00:02:21.740 LINK spdk_trace 00:02:21.740 LINK test_dma 00:02:21.998 LINK pci_ut 00:02:21.998 LINK nvme_fuzz 00:02:21.998 LINK spdk_bdev 00:02:21.998 CC test/event/event_perf/event_perf.o 00:02:21.998 CC test/event/reactor/reactor.o 00:02:21.998 LINK spdk_nvme 00:02:21.998 CC test/event/reactor_perf/reactor_perf.o 00:02:22.257 CC test/event/app_repeat/app_repeat.o 00:02:22.257 LINK vhost_fuzz 00:02:22.257 CC examples/sock/hello_world/hello_sock.o 00:02:22.257 CC test/event/scheduler/scheduler.o 00:02:22.257 CC examples/vmd/lsvmd/lsvmd.o 00:02:22.257 CC examples/vmd/led/led.o 00:02:22.257 LINK spdk_top 00:02:22.257 CC app/vhost/vhost.o 00:02:22.257 CC examples/idxd/perf/perf.o 00:02:22.257 CC examples/thread/thread/thread_ex.o 00:02:22.257 LINK mem_callbacks 00:02:22.257 LINK spdk_nvme_identify 00:02:22.257 LINK spdk_nvme_perf 00:02:22.257 LINK event_perf 00:02:22.257 LINK reactor 00:02:22.257 LINK reactor_perf 00:02:22.257 CC test/nvme/overhead/overhead.o 00:02:22.257 CC test/nvme/startup/startup.o 00:02:22.257 CC test/nvme/err_injection/err_injection.o 00:02:22.257 CC test/nvme/boot_partition/boot_partition.o 00:02:22.257 CC test/nvme/fused_ordering/fused_ordering.o 00:02:22.257 CC test/nvme/aer/aer.o 00:02:22.257 LINK lsvmd 00:02:22.257 CC test/nvme/sgl/sgl.o 00:02:22.257 CC test/nvme/reset/reset.o 00:02:22.257 CC test/nvme/cuse/cuse.o 00:02:22.257 CC test/nvme/compliance/nvme_compliance.o 00:02:22.257 CC test/nvme/e2edp/nvme_dp.o 00:02:22.257 CC test/nvme/fdp/fdp.o 00:02:22.257 CC test/nvme/simple_copy/simple_copy.o 00:02:22.257 CC test/nvme/reserve/reserve.o 00:02:22.257 CC test/nvme/connect_stress/connect_stress.o 00:02:22.257 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:22.257 LINK app_repeat 00:02:22.257 CC test/blobfs/mkfs/mkfs.o 00:02:22.257 CC test/accel/dif/dif.o 00:02:22.257 LINK led 00:02:22.515 LINK hello_sock 00:02:22.515 LINK vhost 00:02:22.516 CC test/lvol/esnap/esnap.o 00:02:22.516 LINK scheduler 00:02:22.516 LINK thread 00:02:22.516 LINK boot_partition 00:02:22.516 LINK startup 00:02:22.516 LINK err_injection 00:02:22.516 LINK idxd_perf 00:02:22.516 LINK memory_ut 00:02:22.516 LINK fused_ordering 00:02:22.516 LINK connect_stress 00:02:22.516 LINK doorbell_aers 00:02:22.516 LINK reserve 00:02:22.516 LINK simple_copy 00:02:22.516 LINK overhead 00:02:22.516 LINK sgl 00:02:22.516 LINK mkfs 00:02:22.516 LINK reset 00:02:22.516 LINK aer 00:02:22.516 LINK nvme_dp 00:02:22.516 LINK nvme_compliance 00:02:22.516 LINK fdp 00:02:22.775 LINK dif 00:02:22.775 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:22.775 CC examples/nvme/arbitration/arbitration.o 00:02:22.775 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:22.775 CC examples/nvme/hello_world/hello_world.o 00:02:22.775 CC examples/nvme/reconnect/reconnect.o 00:02:22.775 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:22.775 CC examples/nvme/abort/abort.o 00:02:22.775 CC examples/nvme/hotplug/hotplug.o 00:02:23.034 CC examples/accel/perf/accel_perf.o 00:02:23.034 CC examples/blob/cli/blobcli.o 00:02:23.034 LINK cmb_copy 00:02:23.034 CC examples/blob/hello_world/hello_blob.o 00:02:23.034 LINK pmr_persistence 00:02:23.034 LINK iscsi_fuzz 00:02:23.034 LINK hello_world 00:02:23.034 LINK hotplug 00:02:23.034 LINK 
arbitration 00:02:23.034 LINK reconnect 00:02:23.034 LINK abort 00:02:23.305 LINK nvme_manage 00:02:23.305 LINK hello_blob 00:02:23.305 CC test/bdev/bdevio/bdevio.o 00:02:23.305 LINK cuse 00:02:23.305 LINK accel_perf 00:02:23.305 LINK blobcli 00:02:23.565 LINK bdevio 00:02:23.824 CC examples/bdev/bdevperf/bdevperf.o 00:02:23.824 CC examples/bdev/hello_world/hello_bdev.o 00:02:24.084 LINK hello_bdev 00:02:24.345 LINK bdevperf 00:02:24.915 CC examples/nvmf/nvmf/nvmf.o 00:02:24.915 LINK nvmf 00:02:25.852 LINK esnap 00:02:26.112 00:02:26.112 real 0m43.680s 00:02:26.112 user 6m29.723s 00:02:26.112 sys 3m26.085s 00:02:26.112 14:29:46 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:26.112 14:29:46 make -- common/autotest_common.sh@10 -- $ set +x 00:02:26.112 ************************************ 00:02:26.112 END TEST make 00:02:26.112 ************************************ 00:02:26.112 14:29:46 -- common/autotest_common.sh@1142 -- $ return 0 00:02:26.112 14:29:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:26.112 14:29:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:26.112 14:29:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:26.112 14:29:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.112 14:29:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:26.112 14:29:46 -- pm/common@44 -- $ pid=2037293 00:02:26.112 14:29:46 -- pm/common@50 -- $ kill -TERM 2037293 00:02:26.112 14:29:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.112 14:29:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:26.112 14:29:46 -- pm/common@44 -- $ pid=2037295 00:02:26.112 14:29:46 -- pm/common@50 -- $ kill -TERM 2037295 00:02:26.112 14:29:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.112 14:29:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:26.112 14:29:46 -- pm/common@44 -- $ pid=2037297 00:02:26.112 14:29:46 -- pm/common@50 -- $ kill -TERM 2037297 00:02:26.112 14:29:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.112 14:29:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:26.112 14:29:46 -- pm/common@44 -- $ pid=2037322 00:02:26.112 14:29:46 -- pm/common@50 -- $ sudo -E kill -TERM 2037322 00:02:26.371 14:29:46 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:26.371 14:29:46 -- nvmf/common.sh@7 -- # uname -s 00:02:26.371 14:29:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:26.371 14:29:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:26.371 14:29:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:26.371 14:29:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:26.371 14:29:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:26.371 14:29:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:26.371 14:29:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:26.371 14:29:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:26.371 14:29:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:26.371 14:29:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:26.371 14:29:46 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:26.371 14:29:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:26.371 14:29:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:26.371 14:29:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:26.371 14:29:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:26.371 14:29:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:26.371 14:29:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:26.371 14:29:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:26.371 14:29:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:26.371 14:29:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:26.371 14:29:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.371 14:29:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.371 14:29:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.371 14:29:46 -- paths/export.sh@5 -- # export PATH 00:02:26.371 14:29:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.371 14:29:46 -- nvmf/common.sh@47 -- # : 0 00:02:26.371 14:29:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:26.371 14:29:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:26.371 14:29:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:26.371 14:29:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:26.371 14:29:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:26.371 14:29:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:26.371 14:29:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:26.371 14:29:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:26.371 14:29:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:26.371 14:29:46 -- spdk/autotest.sh@32 -- # uname -s 00:02:26.371 14:29:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:26.371 14:29:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:26.371 14:29:46 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:26.371 14:29:46 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:26.371 14:29:46 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:26.371 14:29:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:26.371 14:29:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:26.371 14:29:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:26.371 14:29:46 -- spdk/autotest.sh@48 -- # udevadm_pid=2096333 00:02:26.371 14:29:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:26.371 14:29:46 -- pm/common@17 -- # local monitor 00:02:26.371 14:29:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:26.371 14:29:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.371 14:29:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.371 14:29:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.371 14:29:46 -- pm/common@21 -- # date +%s 00:02:26.371 14:29:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.371 14:29:46 -- pm/common@21 -- # date +%s 00:02:26.371 14:29:46 -- pm/common@25 -- # sleep 1 00:02:26.371 14:29:46 -- pm/common@21 -- # date +%s 00:02:26.371 14:29:46 -- pm/common@21 -- # date +%s 00:02:26.371 14:29:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721910586 00:02:26.371 14:29:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721910586 00:02:26.371 14:29:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721910586 00:02:26.371 14:29:46 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721910586 00:02:26.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721910586_collect-vmstat.pm.log 00:02:26.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721910586_collect-cpu-load.pm.log 00:02:26.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721910586_collect-cpu-temp.pm.log 00:02:26.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721910586_collect-bmc-pm.bmc.pm.log 00:02:27.312 14:29:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:27.312 14:29:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:27.312 14:29:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:27.312 14:29:47 -- common/autotest_common.sh@10 -- # set +x 00:02:27.312 14:29:47 -- spdk/autotest.sh@59 -- # create_test_list 00:02:27.312 14:29:47 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:27.312 14:29:47 -- common/autotest_common.sh@10 -- # set +x 00:02:27.312 14:29:47 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:27.312 14:29:47 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:27.312 14:29:47 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
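The four "Redirecting to ..." lines above show where each resource collector writes its log. The pattern behind them, and behind the kill -TERM-on-PID-file cleanup traced at the end of the make step, is: background the collector with -d/-l/-p, record its PID next to the output, and later signal it only if the PID file still exists. A minimal sketch of that pattern, with illustrative function names rather than the exact pm/common helpers:

# Illustrative sketch of the monitor start/stop pattern; function names are assumptions.
start_monitor() {
    local collector=$1 out_dir=$2
    "$collector" -d "$out_dir" -l -p "monitor.autotest.sh.$(date +%s)" &
    echo $! > "$out_dir/$(basename "$collector").pid"
}

stop_monitor() {
    local collector=$1 out_dir=$2 pid_file
    pid_file="$out_dir/$(basename "$collector").pid"
    # Only signal when the PID file exists, mirroring the [[ -e ...pid ]] guards above.
    [[ -e "$pid_file" ]] && kill -TERM "$(cat "$pid_file")"
}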
00:02:27.312 14:29:47 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:27.312 14:29:47 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:27.312 14:29:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:27.312 14:29:47 -- common/autotest_common.sh@1455 -- # uname 00:02:27.312 14:29:47 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:27.312 14:29:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:27.312 14:29:47 -- common/autotest_common.sh@1475 -- # uname 00:02:27.312 14:29:47 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:27.572 14:29:47 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:27.572 14:29:47 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:27.572 14:29:47 -- spdk/autotest.sh@72 -- # hash lcov 00:02:27.572 14:29:47 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:27.572 14:29:47 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:27.572 --rc lcov_branch_coverage=1 00:02:27.572 --rc lcov_function_coverage=1 00:02:27.572 --rc genhtml_branch_coverage=1 00:02:27.572 --rc genhtml_function_coverage=1 00:02:27.572 --rc genhtml_legend=1 00:02:27.572 --rc geninfo_all_blocks=1 00:02:27.572 ' 00:02:27.572 14:29:47 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:27.572 --rc lcov_branch_coverage=1 00:02:27.572 --rc lcov_function_coverage=1 00:02:27.572 --rc genhtml_branch_coverage=1 00:02:27.572 --rc genhtml_function_coverage=1 00:02:27.572 --rc genhtml_legend=1 00:02:27.572 --rc geninfo_all_blocks=1 00:02:27.572 ' 00:02:27.572 14:29:47 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:27.572 --rc lcov_branch_coverage=1 00:02:27.572 --rc lcov_function_coverage=1 00:02:27.572 --rc genhtml_branch_coverage=1 00:02:27.572 --rc genhtml_function_coverage=1 00:02:27.572 --rc genhtml_legend=1 00:02:27.572 --rc geninfo_all_blocks=1 00:02:27.572 --no-external' 00:02:27.572 14:29:47 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:27.572 --rc lcov_branch_coverage=1 00:02:27.572 --rc lcov_function_coverage=1 00:02:27.572 --rc genhtml_branch_coverage=1 00:02:27.572 --rc genhtml_function_coverage=1 00:02:27.572 --rc genhtml_legend=1 00:02:27.572 --rc geninfo_all_blocks=1 00:02:27.572 --no-external' 00:02:27.572 14:29:47 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:27.572 lcov: LCOV version 1.14 00:02:27.572 14:29:47 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:39.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:39.790 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:47.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:47.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:47.911 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:47.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:47.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:47.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:47.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:47.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:47.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:47.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:47.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:47.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:47.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:47.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:47.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:47.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:47.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:47.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:47.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:47.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:47.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:47.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:47.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:47.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:47.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:47.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:47.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:47.912 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:47.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:47.912 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:47.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:47.912 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:47.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:47.912 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:47.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:47.912 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:47.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:47.912 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:47.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:47.912 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:47.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:47.912 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:47.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:47.912 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:47.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:47.912 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:47.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:47.912 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:48.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:48.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:48.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:48.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:48.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:48.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:48.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:48.171 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:48.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:48.171 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:48.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:48.172 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:48.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:48.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:48.480 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:48.480 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:48.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:48.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:48.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:48.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:48.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:51.785 14:30:12 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:51.785 14:30:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:51.785 14:30:12 -- common/autotest_common.sh@10 -- # set +x 00:02:51.785 14:30:12 -- spdk/autotest.sh@91 -- # rm -f 00:02:51.785 14:30:12 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.322 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:54.322 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:54.322 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:54.322 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:54.322 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:54.322 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:54.582 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:54.582 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:54.582 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:54.582 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:54.582 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:54.582 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:54.582 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:54.582 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:54.582 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:54.582 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:54.582 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:54.582 14:30:14 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:54.582 14:30:14 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:54.582 14:30:14 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:54.582 14:30:14 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:54.582 14:30:14 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:54.582 14:30:14 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:54.582 14:30:14 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:54.582 14:30:14 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:54.582 14:30:14 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:54.582 14:30:14 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:54.582 
14:30:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:54.582 14:30:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:54.582 14:30:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:54.582 14:30:14 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:54.582 14:30:14 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:54.841 No valid GPT data, bailing 00:02:54.841 14:30:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:54.841 14:30:14 -- scripts/common.sh@391 -- # pt= 00:02:54.841 14:30:14 -- scripts/common.sh@392 -- # return 1 00:02:54.841 14:30:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:54.841 1+0 records in 00:02:54.841 1+0 records out 00:02:54.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00482878 s, 217 MB/s 00:02:54.841 14:30:14 -- spdk/autotest.sh@118 -- # sync 00:02:54.841 14:30:14 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:54.841 14:30:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:54.841 14:30:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:00.117 14:30:20 -- spdk/autotest.sh@124 -- # uname -s 00:03:00.117 14:30:20 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:00.117 14:30:20 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:00.117 14:30:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:00.117 14:30:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:00.117 14:30:20 -- common/autotest_common.sh@10 -- # set +x 00:03:00.117 ************************************ 00:03:00.117 START TEST setup.sh 00:03:00.117 ************************************ 00:03:00.117 14:30:20 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:00.117 * Looking for test storage... 00:03:00.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:00.117 14:30:20 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:00.118 14:30:20 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:00.118 14:30:20 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:00.118 14:30:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:00.118 14:30:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:00.118 14:30:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:00.118 ************************************ 00:03:00.118 START TEST acl 00:03:00.118 ************************************ 00:03:00.118 14:30:20 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:00.118 * Looking for test storage... 
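The wipe step traced just above (the spdk-gpt.py probe reporting "No valid GPT data, bailing", the empty blkid PTTYPE check, then dd of a single 1M block of zeros) only zeroes the first megabyte of a namespace when it is not zoned and carries no recognizable partition table. A rough stand-alone sketch of that guard, with a hypothetical helper name and blkid standing in for the full spdk-gpt.py-then-blkid probe:

# Hypothetical sketch of the pre-cleanup wipe guard; the helper name is illustrative.
wipe_if_unused() {
    local dev=$1 zoned
    # Treat a missing queue/zoned attribute as "none" (not zoned), as in is_block_zoned.
    zoned=$(cat "/sys/block/$(basename "$dev")/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned != none ]] && return 0
    # Only wipe when no partition-table signature is found on the device.
    if [[ -z "$(blkid -s PTTYPE -o value "$dev")" ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
}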
00:03:00.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:00.118 14:30:20 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:00.377 14:30:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:00.377 14:30:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:00.377 14:30:20 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:00.377 14:30:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:00.377 14:30:20 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:00.377 14:30:20 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:00.377 14:30:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:00.377 14:30:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:00.377 14:30:20 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:00.377 14:30:20 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:00.377 14:30:20 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:00.377 14:30:20 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:00.377 14:30:20 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:00.377 14:30:20 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:00.377 14:30:20 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.670 14:30:23 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:03.670 14:30:23 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:03.670 14:30:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.670 14:30:23 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:03.670 14:30:23 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.670 14:30:23 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:06.314 Hugepages 00:03:06.314 node hugesize free / total 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 00:03:06.314 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:06.314 14:30:26 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:06.314 14:30:26 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:06.314 14:30:26 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.314 14:30:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:06.314 ************************************ 00:03:06.314 START TEST denied 00:03:06.314 ************************************ 00:03:06.314 14:30:26 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:06.314 14:30:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:06.314 14:30:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:06.314 14:30:26 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:06.314 14:30:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.314 14:30:26 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:08.850 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:08.850 14:30:29 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:08.851 14:30:29 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:08.851 14:30:29 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:08.851 14:30:29 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:08.851 14:30:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:08.851 14:30:29 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:08.851 14:30:29 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:08.851 14:30:29 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:08.851 14:30:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.851 14:30:29 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.041 00:03:13.041 real 0m6.596s 00:03:13.041 user 0m2.185s 00:03:13.041 sys 0m3.772s 00:03:13.041 14:30:32 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:13.041 14:30:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:13.041 ************************************ 00:03:13.041 END TEST denied 00:03:13.041 ************************************ 00:03:13.041 14:30:32 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:13.041 14:30:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:13.041 14:30:32 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:13.041 14:30:32 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.041 14:30:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:13.041 ************************************ 00:03:13.041 START TEST allowed 00:03:13.041 ************************************ 00:03:13.041 14:30:33 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:13.041 14:30:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:13.041 14:30:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:13.041 14:30:33 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:13.041 14:30:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.041 14:30:33 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.233 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:17.233 14:30:36 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:17.233 14:30:36 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:17.233 14:30:36 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:17.233 14:30:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:17.233 14:30:36 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:19.768 00:03:19.768 real 0m6.749s 00:03:19.768 user 0m2.188s 00:03:19.768 sys 0m3.785s 00:03:19.768 14:30:39 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:19.768 14:30:39 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:19.768 ************************************ 00:03:19.768 END TEST allowed 00:03:19.768 ************************************ 00:03:19.768 14:30:39 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:19.768 00:03:19.768 real 0m19.469s 00:03:19.768 user 0m6.723s 00:03:19.768 sys 0m11.545s 00:03:19.768 14:30:39 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:19.768 14:30:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:19.768 ************************************ 00:03:19.768 END TEST acl 00:03:19.768 ************************************ 00:03:19.768 14:30:39 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:19.768 14:30:39 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:19.768 14:30:39 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:19.768 14:30:39 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:19.768 14:30:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:19.768 ************************************ 00:03:19.768 START TEST hugepages 00:03:19.768 ************************************ 00:03:19.768 14:30:39 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:19.768 * Looking for test storage... 00:03:19.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 168365580 kB' 'MemAvailable: 171599856 kB' 'Buffers: 3896 kB' 'Cached: 14666816 kB' 'SwapCached: 0 kB' 'Active: 11528440 kB' 'Inactive: 3694312 kB' 'Active(anon): 11110484 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555264 kB' 'Mapped: 197004 kB' 'Shmem: 10558444 kB' 'KReclaimable: 532448 kB' 'Slab: 1188312 kB' 'SReclaimable: 532448 kB' 'SUnreclaim: 655864 kB' 'KernelStack: 20576 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982020 kB' 'Committed_AS: 12645620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317048 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.768 14:30:39 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.768 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
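The long run of "[[ ... == \H\u\g\e\p\a\g\e\s\i\z\e ]] ... continue" entries here (and continuing below) is just the xtrace of get_meminfo scanning /proc/meminfo key by key until it reaches Hugepagesize. Stripped of tracing, and simplified to ignore the per-NUMA-node "Node N ..." prefixes the real helper also handles, it reduces to roughly:

# Simplified sketch of the scan being traced here; the NUMA-node handling of the
# real get_meminfo is omitted for brevity.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... until the requested key
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

Called as get_meminfo Hugepagesize on the box above it would print 2048, matching the 'Hugepagesize: 2048 kB' field captured in the trace.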
00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.769 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.770 14:30:39 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:19.770 14:30:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:19.770 14:30:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:19.770 14:30:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:19.770 14:30:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
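The trace above is setup/common.sh scanning /proc/meminfo line by line (IFS=': '; read -r var val _), skipping every field until it reaches Hugepagesize, echoing its value (2048) and returning 0; setup/hugepages.sh then records that as default_hugepages=2048 and zeroes any pre-existing per-node hugepage counts. A minimal sketch of that parsing pattern, not the SPDK function itself; the helper name get_meminfo_value is illustrative:

# Split each "Key:   value unit" line of /proc/meminfo on ':' and spaces
# and print the value for the requested key, mirroring the loop in the trace.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"          # e.g. 2048 for Hugepagesize on this host
            return 0
        fi
    done < /proc/meminfo
    return 1
}

default_hugepages=$(get_meminfo_value Hugepagesize)   # -> 2048 (kB), matching the echo above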
echo 0 00:03:19.770 14:30:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:19.770 14:30:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:19.770 14:30:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:19.770 14:30:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:19.770 14:30:40 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:19.770 14:30:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:19.770 14:30:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:19.770 14:30:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:19.770 ************************************ 00:03:19.770 START TEST default_setup 00:03:19.770 ************************************ 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.770 14:30:40 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.369 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:22.369 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:22.369 0000:00:04.5 (8086 2021): ioatdma -> 
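Just before the device-rebinding output that follows, get_test_nr_hugepages is called with 2097152 (kB) for node 0; with the 2048 kB page size found above that works out to 2097152 / 2048 = 1024 pages, the nr_hugepages=1024 the trace carries into the test. A rough by-hand equivalent using the standard kernel sysfs knobs the script referenced earlier (the actual allocation here is performed by scripts/setup.sh, and the paths need root):

size_kb=2097152                      # 2 GiB requested for default_setup
page_kb=2048                         # Hugepagesize from /proc/meminfo
nr_pages=$(( size_kb / page_kb ))    # = 1024
# Node 0 gets the pages; node 1 stays at 0, as clear_hp / nodes_test set up above.
echo "$nr_pages" | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 0           | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages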
vfio-pci 00:03:22.369 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:22.369 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:22.369 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:22.369 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:22.369 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:22.369 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:22.369 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:22.369 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:22.369 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:22.369 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:22.369 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:22.369 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:22.629 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:23.573 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170506700 kB' 'MemAvailable: 173740928 kB' 'Buffers: 3896 kB' 'Cached: 14666928 kB' 'SwapCached: 0 kB' 'Active: 11549352 kB' 'Inactive: 3694312 kB' 'Active(anon): 11131396 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575776 kB' 'Mapped: 197000 kB' 'Shmem: 10558556 kB' 'KReclaimable: 532352 kB' 'Slab: 1187096 kB' 'SReclaimable: 532352 
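The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines above are scripts/setup.sh detaching the I/OAT DMA engines and the NVMe SSD at 0000:5e:00.0 from their kernel drivers and handing them to vfio-pci so SPDK can drive them from user space. The log does not show how setup.sh performs the rebind; a generic sysfs rebind of a single PCI function looks roughly like this sketch:

bdf=0000:5e:00.0                                                    # the NVMe device rebound above
echo vfio-pci | sudo tee /sys/bus/pci/devices/$bdf/driver_override  # prefer vfio-pci at next probe
echo "$bdf"   | sudo tee /sys/bus/pci/devices/$bdf/driver/unbind    # detach from the nvme driver
echo "$bdf"   | sudo tee /sys/bus/pci/drivers_probe                 # re-probe; vfio-pci claims it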
kB' 'SUnreclaim: 654744 kB' 'KernelStack: 20624 kB' 'PageTables: 9388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12677876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.573 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- 
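At this point verify_nr_hugepages has read AnonHugePages (0 kB, so anon=0) and immediately repeats the same scan for HugePages_Surp. The fields it is after can also be spot-checked by hand; the values below are taken from the meminfo snapshot embedded in the trace (1024 pages allocated, all still free, nothing reserved or surplus):

$ grep -E 'AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
AnonHugePages:         0 kB
HugePages_Total:    1024
HugePages_Free:     1024
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB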
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:23.574 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170512588 kB' 'MemAvailable: 173746816 kB' 'Buffers: 3896 kB' 'Cached: 14666928 kB' 'SwapCached: 0 kB' 'Active: 11550016 kB' 'Inactive: 3694312 kB' 'Active(anon): 11132060 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576468 kB' 'Mapped: 197060 kB' 'Shmem: 10558556 kB' 'KReclaimable: 532352 kB' 'Slab: 1187080 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654728 kB' 'KernelStack: 20960 kB' 'PageTables: 9764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12677896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317320 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue
[setup/common.sh@31-@32 xtrace: the IFS=': ' / read -r var val _ / field-compare / continue cycle repeats for every remaining /proc/meminfo field until HugePages_Surp is reached]
00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
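A note on the backslash-heavy comparisons in this trace: they are not corruption. Under set -x, bash prints the pattern side of a [[ ... == ... ]] test with every character escaped, so a comparison against the literal string HugePages_Surp is rendered as \H\u\g\e\P\a\g\e\s\_\S\u\r\p. A two-line reproduction in any recent bash:

    set -x
    [[ MemTotal == HugePages_Surp ]] || true
    # xtrace output:
    # + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    # + true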
00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.576 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.577 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.577 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.577 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:23.577 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:23.577 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170509720 kB' 'MemAvailable: 173743948 kB' 'Buffers: 3896 kB' 'Cached: 14666948 kB' 'SwapCached: 0 kB' 'Active: 11548548 kB' 'Inactive: 3694312 kB' 'Active(anon): 11130592 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575396 kB' 'Mapped: 196924 kB' 'Shmem: 10558576 kB' 'KReclaimable: 532352 kB' 'Slab: 1187032 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654680 kB' 'KernelStack: 20928 kB' 'PageTables: 10104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12677916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317304 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB'
[setup/common.sh@31-@32 xtrace: the IFS=': ' / read -r var val _ / field-compare / continue cycle repeats for every /proc/meminfo field before HugePages_Rsvd]
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:23.579 nr_hugepages=1024
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:23.579 resv_hugepages=0
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:23.579 surplus_hugepages=0
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:23.579 anon_hugepages=0
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
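For readers following the trace, the get_meminfo helper that drives the lookups above can be reconstructed from the xtrace itself. A minimal sketch, assuming only what the trace shows; the real setup/common.sh may differ in details such as exact line numbers and error handling:

    #!/usr/bin/env bash
    # Sketch of get_meminfo as suggested by the xtrace above (not the actual SPDK source).
    # Usage: get_meminfo <field> [numa-node]; prints the field's value from /proc/meminfo,
    # or from the per-node meminfo file when a node number is given.
    shopt -s extglob

    get_meminfo() {
            local get=$1 node=${2:-}
            local var val _
            local mem_f mem

            mem_f=/proc/meminfo
            if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                    mem_f=/sys/devices/system/node/node$node/meminfo
            elif [[ -n $node ]]; then
                    return 1        # a node was requested but it has no meminfo file
            fi

            mapfile -t mem < "$mem_f"
            # Per-node meminfo lines carry a "Node <n> " prefix; strip it so both
            # file formats parse the same way.
            mem=("${mem[@]#Node +([0-9]) }")

            local line
            for line in "${mem[@]}"; do
                    IFS=': ' read -r var val _ <<< "$line"
                    [[ $var == "$get" ]] || continue
                    echo "$val"     # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
                    return 0
            done
    }

    get_meminfo HugePages_Rsvd      # system-wide lookup, as traced above
    get_meminfo HugePages_Surp 0    # NUMA node 0 lookup, as traced further below

Run against the meminfo dumps printed in this log, such a scan yields exactly the 0 and 1024 values echoed by the trace.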
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:23.579 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170508876 kB' 'MemAvailable: 173743104 kB' 'Buffers: 3896 kB' 'Cached: 14666972 kB' 'SwapCached: 0 kB' 'Active: 11548772 kB' 'Inactive: 3694312 kB' 'Active(anon): 11130816 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575468 kB' 'Mapped: 196924 kB' 'Shmem: 10558600 kB' 'KReclaimable: 532352 kB' 'Slab: 1187032 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654680 kB' 'KernelStack: 20992 kB' 'PageTables: 10224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12677940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317336 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB'
[setup/common.sh@31-@32 xtrace: the IFS=': ' / read -r var val _ / field-compare / continue cycle repeats for every /proc/meminfo field before HugePages_Total]
00:03:23.580 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:23.580 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
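The get_nodes step just traced builds the per-NUMA-node hugepage counts (1024 on node0, 0 on node1) before the per-node HugePages_Surp lookups that follow. A minimal sketch of that accounting; the loop and array names are taken from the trace, while the nr_hugepages sysfs path used as the source of the counts, and the way nodes_test is populated, are assumptions:

    #!/usr/bin/env bash
    # Sketch of the per-node accounting suggested by the hugepages.sh xtrace above
    # (not the actual SPDK source). The sysfs nr_hugepages path is an assumed detail.
    shopt -s extglob
    declare -a nodes_sys nodes_test

    get_nodes() {
            local node
            for node in /sys/devices/system/node/node+([0-9]); do
                    # Index by node number: node0 -> nodes_sys[0], node1 -> nodes_sys[1].
                    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
            done
            no_nodes=${#nodes_sys[@]}
            (( no_nodes > 0 ))      # at least one NUMA node must be present
    }

    get_nodes
    nodes_test=("${nodes_sys[@]}")  # simplified: the real script fills nodes_test per its allocation policy
    resv=0
    for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))                    # as in hugepages.sh@116 above
            # surp=$(get_meminfo HugePages_Surp "$node")      # per-node lookup, as in hugepages.sh@117 below
    done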
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:23.581 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91331980 kB' 'MemUsed: 6283648 kB' 'SwapCached: 0 kB' 'Active: 2542936 kB' 'Inactive: 219240 kB' 'Active(anon): 2381112 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2633440 kB' 'Mapped: 72084 kB' 'AnonPages: 131872 kB' 'Shmem: 2252376 kB' 'KernelStack: 11784 kB' 'PageTables: 3684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354428 kB' 'Slab: 665780 kB' 'SReclaimable: 354428 kB' 'SUnreclaim: 311352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-@32 xtrace: the IFS=': ' / read -r var val _ / field-compare / continue cycle repeats over the node0 meminfo fields while looking up HugePages_Surp]
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
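Once the lookup below reports HugePages_Surp=0, default_setup compares the per-node hugepage count it observed against the expected value, which is the "node0=1024 expecting 1024" line that follows. A minimal sketch of that comparison, assuming a single NUMA node; nodes_test mirrors the array name in the trace, the expected count is hard-coded here for illustration:

#!/usr/bin/env bash
# Sketch of the check performed at hugepages.sh@126-130 in the trace.
declare -A nodes_test=([0]=1024)   # hugepages observed per NUMA node
expected=1024

for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[$node]} expecting ${expected}"
    [[ ${nodes_test[$node]} == "$expected" ]] || exit 1
done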
00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:23.582 node0=1024 expecting 1024 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:23.582 00:03:23.582 real 0m3.771s 00:03:23.582 user 0m1.150s 00:03:23.582 sys 0m1.811s 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:23.582 14:30:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:23.582 ************************************ 00:03:23.582 END TEST default_setup 00:03:23.582 ************************************ 00:03:23.582 14:30:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:23.582 14:30:43 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:23.582 14:30:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:23.582 14:30:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.582 14:30:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:23.843 ************************************ 00:03:23.843 START TEST per_node_1G_alloc 00:03:23.843 ************************************ 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.843 14:30:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.386 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.386 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:26.386 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.386 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.386 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.386 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.386 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.386 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.386 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.386 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.386 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.386 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.386 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.386 
0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.386 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.386 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.386 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170507908 kB' 'MemAvailable: 173742136 kB' 'Buffers: 3896 kB' 'Cached: 14667064 kB' 'SwapCached: 0 kB' 'Active: 11549608 kB' 'Inactive: 3694312 kB' 'Active(anon): 11131652 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576056 kB' 'Mapped: 196992 kB' 'Shmem: 10558692 kB' 'KReclaimable: 532352 kB' 'Slab: 1187380 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 655028 kB' 'KernelStack: 21056 kB' 'PageTables: 10092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12678396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317512 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.386 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.387 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170506720 kB' 'MemAvailable: 173740948 kB' 'Buffers: 3896 kB' 'Cached: 14667068 kB' 'SwapCached: 0 kB' 'Active: 11550068 kB' 'Inactive: 3694312 kB' 'Active(anon): 11132112 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576560 kB' 'Mapped: 197072 kB' 'Shmem: 10558696 kB' 'KReclaimable: 532352 kB' 'Slab: 1187412 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 655060 kB' 'KernelStack: 21088 kB' 'PageTables: 10188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12678416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317480 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.388 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 
14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.389 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170503516 kB' 'MemAvailable: 173737744 kB' 'Buffers: 3896 kB' 'Cached: 14667084 kB' 'SwapCached: 0 kB' 'Active: 11549624 kB' 'Inactive: 3694312 kB' 'Active(anon): 11131668 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576136 kB' 'Mapped: 196944 kB' 'Shmem: 10558712 kB' 'KReclaimable: 532352 kB' 'Slab: 1187408 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 655056 kB' 'KernelStack: 21184 kB' 'PageTables: 10336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12676948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317528 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.390 
14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.390 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.391 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:26.392 14:30:46 
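The trace above is setup/common.sh's get_meminfo helper resolving HugePages_Surp and then HugePages_Rsvd for the whole system: it reads a meminfo file into an array, strips any leading "Node <n> " prefix, then walks each line with IFS=': ' until the requested key matches, echoes its value (0 in both cases here) and returns. The bash below is a minimal sketch reconstructed from those logged commands; it mirrors the names the trace prints but is an approximation, not the verbatim SPDK source.

#!/usr/bin/env bash
# Minimal sketch of get_meminfo, reconstructed from the xtrace above.
# Not the verbatim SPDK setup/common.sh; names follow what the trace prints.
shopt -s extglob

get_meminfo() {
    local get=$1        # field to resolve, e.g. HugePages_Rsvd
    local node=${2:-}   # optional NUMA node number
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # When a node is given and its meminfo exists, read that file instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <n> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Both calls print 0 on this host, matching the "echo 0 / return 0" above.
get_meminfo HugePages_Surp
get_meminfo HugePages_Rsvd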
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.392 nr_hugepages=1024 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.392 resv_hugepages=0 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.392 surplus_hugepages=0 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.392 anon_hugepages=0 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.392 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170503024 kB' 'MemAvailable: 173737252 kB' 'Buffers: 3896 kB' 'Cached: 14667108 kB' 'SwapCached: 0 kB' 'Active: 11548276 kB' 'Inactive: 3694312 kB' 'Active(anon): 11130320 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574848 kB' 'Mapped: 196944 kB' 'Shmem: 10558736 kB' 'KReclaimable: 532352 kB' 'Slab: 1187400 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 655048 kB' 'KernelStack: 20656 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12675844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 
164626432 kB' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.393 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.394 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.656 14:30:46 
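At this point the global check has passed ((( 1024 == nr_hugepages + surp + resv )) with surp=0 and resv=0), and get_nodes starts the per-node pass traced below: both NUMA nodes report 512 huge pages (no_nodes=2), and each node's own meminfo is then re-read, beginning with HugePages_Surp on node 0. The following is a rough, simplified sketch of that pass inferred from the trace, not the verbatim setup/hugepages.sh; the awk lookups stand in for the get_meminfo helper sketched earlier, and nodes_test is seeded directly from the sysfs counts.

#!/usr/bin/env bash
# Rough sketch of the per-node pass, inferred from the trace and simplified.
shopt -s extglob nullglob
nodes_sys=() nodes_test=()
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}
    # On this host both nodes report "HugePages_Total: 512".
    nodes_sys[n]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
    nodes_test[n]=${nodes_sys[n]}      # expected per-node page count
done
no_nodes=${#nodes_sys[@]}              # 2 on this machine
(( no_nodes > 0 )) || exit 1

resv=0   # HugePages_Rsvd from the global check above
for n in "${!nodes_test[@]}"; do
    (( nodes_test[n] += resv ))
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' \
        /sys/devices/system/node/node"$n"/meminfo)
    echo "node$n: expected ${nodes_test[n]} pages, surplus $surp"
done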
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92374988 kB' 'MemUsed: 5240640 kB' 'SwapCached: 0 kB' 'Active: 2542384 kB' 'Inactive: 219240 kB' 'Active(anon): 2380560 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2633440 kB' 'Mapped: 72096 kB' 'AnonPages: 131364 kB' 'Shmem: 2252376 kB' 'KernelStack: 11688 kB' 'PageTables: 3452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354428 kB' 'Slab: 666040 kB' 'SReclaimable: 354428 kB' 'SUnreclaim: 311612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.656 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 
14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.657 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 
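The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" followed by "continue" above are the xtrace of setup/common.sh's get_meminfo walking every meminfo line until it hits the requested field. A minimal standalone sketch of that scan loop, assuming a plain "Key: value kB" input file; the function name lookup_meminfo is illustrative and not from the repository:

# Sketch: scan meminfo-style lines for one key, as the traced loop does
# (IFS=': '; read -r var val _; compare; echo the value).
lookup_meminfo() {
    local get=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each non-matching key logs one "continue" in the xtrace
        echo "$val"
        return 0
    done < "$file"
    return 1
}
# e.g. lookup_meminfo HugePages_Surp   -> prints the surplus hugepage count (0 in this run)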
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 78131540 kB' 'MemUsed: 15633968 kB' 'SwapCached: 0 kB' 'Active: 9005856 kB' 'Inactive: 3475072 kB' 'Active(anon): 8749724 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3475072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12037608 kB' 'Mapped: 124848 kB' 'AnonPages: 442880 kB' 'Shmem: 8306404 kB' 
'KernelStack: 8936 kB' 'PageTables: 5728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177924 kB' 'Slab: 521456 kB' 'SReclaimable: 177924 kB' 'SUnreclaim: 343532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
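The same chunk also shows how get_meminfo picks its data source when a node argument is given: it starts from /proc/meminfo, switches to /sys/devices/system/node/node1/meminfo because that file exists, reads it with mapfile, and strips the leading "Node 1 " prefix so the lines look like ordinary meminfo entries. A hedged sketch of that selection and prefix strip, with variable names mirroring the trace; the function name per_node_meminfo is illustrative:

# Sketch: choose the per-node meminfo file and drop the "Node <n> " prefix,
# mirroring the mem_f / mapfile / "${mem[@]#Node +([0-9]) }" steps traced above.
shopt -s extglob                          # needed for the +([0-9]) pattern
per_node_meminfo() {
    local node=$1 mem_f=/proc/meminfo
    local -a mem
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"             # one array element per line
    mem=("${mem[@]#Node +([0-9]) }")      # "Node 1 HugePages_Free: 512" -> "HugePages_Free: 512"
    printf '%s\n' "${mem[@]}"
}
# e.g. per_node_meminfo 1 | grep HugePages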
00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.658 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.659 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.660 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.660 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.660 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:26.660 node0=512 expecting 512 00:03:26.660 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.660 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.660 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.660 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:26.660 node1=512 expecting 512 00:03:26.660 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:26.660 00:03:26.660 real 0m2.877s 00:03:26.660 user 0m1.157s 00:03:26.660 sys 0m1.782s 00:03:26.660 14:30:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.660 14:30:46 
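The trace just above finishes per_node_1G_alloc's check: the surplus and reserved counts read back from each node (both 0 here) are folded into nodes_test[], and the totals are printed as "node0=512 expecting 512" / "node1=512 expecting 512" before the final [[ 512 == 512 ]] passes the test. A simplified sketch of that bookkeeping, using the values from this run and dropping the sorted_t/sorted_s helper arrays:

# Sketch: per-node hugepage bookkeeping, each node must end up at the expected 512 pages.
nodes_test=( [0]=512 [1]=512 )    # pages this run placed on each node
expected=512
surp=0 resv=0                     # both read back as 0 above (the "echo 0" returns)
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += surp + resv ))
    echo "node${node}=${nodes_test[node]} expecting ${expected}"
    [[ ${nodes_test[node]} == "$expected" ]] || echo "node${node} mismatch"
done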
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:26.660 ************************************ 00:03:26.660 END TEST per_node_1G_alloc 00:03:26.660 ************************************ 00:03:26.660 14:30:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:26.660 14:30:46 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:26.660 14:30:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.660 14:30:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.660 14:30:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:26.660 ************************************ 00:03:26.660 START TEST even_2G_alloc 00:03:26.660 ************************************ 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:26.660 14:30:46 
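The even_2G_alloc prologue above works out its page counts: the 2097152 kB request over the 2048 kB Hugepagesize reported in /proc/meminfo is consistent with the traced nr_hugepages=1024, and with no user-specified nodes that count is spread evenly over the 2 NUMA nodes, 512 apiece, before NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are handed to setup.sh. A short sketch of that arithmetic as I read it from the trace, values taken from this run:

# Sketch of the page-count arithmetic traced above.
size_kb=2097152            # requested total, kB (2 GiB)
hugepagesize_kb=2048       # Hugepagesize from /proc/meminfo
no_nodes=2                 # NUMA nodes on this box
nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024, exported as NRHUGE
per_node=$(( nr_hugepages / no_nodes ))         # 512 per node with HUGE_EVEN_ALLOC=yes
echo "NRHUGE=${nr_hugepages} -> ${per_node} pages on each of ${no_nodes} nodes"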
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.660 14:30:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:29.200 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:29.200 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:29.200 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.466 14:30:49 
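After scripts/setup.sh runs (every listed device is already bound to vfio-pci), verify_nr_hugepages begins its accounting: the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test checks that transparent hugepages are not set to [never], and since [madvise] is selected it goes on to read AnonHugePages, then HugePages_Surp, from /proc/meminfo (the node argument is empty, so the system-wide file is used). A hedged sketch of that opening sequence; it uses awk instead of the script's get_meminfo, so treat it as an equivalent reading rather than the actual implementation:

# Sketch: look at the THP setting, then pull AnonHugePages and HugePages_Surp
# out of /proc/meminfo, as the verification traced above does.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # "always [madvise] never" on this box
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP not disabled, so anonymous hugepage usage is worth recording
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
fi
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
echo "anon=${anon:-0} surp=${surp:-0}"    # both 0 in this run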
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170506180 kB' 'MemAvailable: 173740408 kB' 'Buffers: 3896 kB' 'Cached: 14667224 kB' 'SwapCached: 0 kB' 'Active: 11549508 kB' 'Inactive: 3694312 kB' 'Active(anon): 11131552 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576404 kB' 'Mapped: 196436 kB' 'Shmem: 10558852 kB' 'KReclaimable: 532352 kB' 'Slab: 1187248 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654896 kB' 'KernelStack: 20560 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12657996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317212 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.466 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.467 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170508092 kB' 'MemAvailable: 173742320 kB' 'Buffers: 3896 kB' 'Cached: 14667228 kB' 'SwapCached: 0 kB' 'Active: 11544748 kB' 'Inactive: 3694312 kB' 'Active(anon): 11126792 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571204 kB' 'Mapped: 195932 kB' 'Shmem: 10558856 kB' 'KReclaimable: 532352 kB' 'Slab: 1187220 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654868 kB' 'KernelStack: 20560 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12651892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.468 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.469 14:30:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.469 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
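(For reference: the xtrace above is setup/common.sh's get_meminfo() helper scanning /proc/meminfo field by field until it reaches the requested key, here HugePages_Surp, which reads back 0, so setup/hugepages.sh@99 sets surp=0; the same helper is then re-entered for HugePages_Rsvd below. A simplified, hedged reconstruction of what the traced commands do — the real helper may differ in details this log does not show:

  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo mem line
      # with a node argument, prefer that node's meminfo file
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      shopt -s extglob
      mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N" prefix used by per-node files
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
  }

Called as "get_meminfo HugePages_Surp" with no node argument it walks the system-wide snapshot printed above and returns 0.)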
get=HugePages_Rsvd 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170507592 kB' 'MemAvailable: 173741820 kB' 'Buffers: 3896 kB' 'Cached: 14667244 kB' 'SwapCached: 0 kB' 'Active: 11544136 kB' 'Inactive: 3694312 kB' 'Active(anon): 11126180 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570592 kB' 'Mapped: 195932 kB' 'Shmem: 10558872 kB' 'KReclaimable: 532352 kB' 'Slab: 1187208 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654856 kB' 'KernelStack: 20544 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12651912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.470 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 
14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.471 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.472 nr_hugepages=1024 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.472 resv_hugepages=0 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.472 surplus_hugepages=0 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.472 anon_hugepages=0 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.472 14:30:49 
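(The echo and (( )) lines just above are the accounting check for this even_2G_alloc pass: 1024 pre-allocated 2048 kB hugepages, i.e. 2 GiB total, with zero reserved, surplus, and anonymous (THP) huge pages. Restated loosely as shell — not the literal setup/hugepages.sh text:

  nr_hugepages=1024                      # requested allocation, echoed at hugepages.sh@102
  anon=0 surp=0 resv=0                   # from the three get_meminfo calls traced above
  total=$(get_meminfo HugePages_Total)   # read back from /proc/meminfo, 1024 here
  (( total == nr_hugepages + surp + resv ))   # every allocated page accounted for as a plain hugepage

The HugePages_Total read-back is the get_meminfo pass whose trace begins below.)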
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170507592 kB' 'MemAvailable: 173741820 kB' 'Buffers: 3896 kB' 'Cached: 14667284 kB' 'SwapCached: 0 kB' 'Active: 11543820 kB' 'Inactive: 3694312 kB' 'Active(anon): 11125864 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570200 kB' 'Mapped: 195932 kB' 'Shmem: 10558912 kB' 'KReclaimable: 532352 kB' 'Slab: 1187208 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654856 kB' 'KernelStack: 20528 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12651936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.472 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.473 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92370436 kB' 'MemUsed: 5245192 kB' 'SwapCached: 0 kB' 'Active: 2540856 kB' 'Inactive: 219240 kB' 'Active(anon): 2379032 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2633444 kB' 'Mapped: 71756 kB' 'AnonPages: 129916 kB' 'Shmem: 2252380 kB' 'KernelStack: 11688 kB' 'PageTables: 3364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354428 kB' 'Slab: 665860 kB' 'SReclaimable: 354428 kB' 'SUnreclaim: 311432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.474 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 
14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 
14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.475 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 78137408 kB' 'MemUsed: 15628100 kB' 'SwapCached: 0 kB' 'Active: 9003352 kB' 'Inactive: 3475072 kB' 'Active(anon): 8747220 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3475072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12037764 kB' 'Mapped: 124176 kB' 'AnonPages: 440744 kB' 'Shmem: 8306560 kB' 'KernelStack: 8856 kB' 'PageTables: 5356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177924 kB' 'Slab: 521348 kB' 'SReclaimable: 177924 kB' 'SUnreclaim: 343424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.476 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:29.477 node0=512 expecting 512 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:29.477 node1=512 expecting 512 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:29.477 00:03:29.477 real 0m2.913s 00:03:29.477 user 0m1.199s 00:03:29.477 sys 0m1.780s 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.477 14:30:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:29.477 ************************************ 00:03:29.477 END TEST even_2G_alloc 00:03:29.477 
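The even_2G_alloc output above is almost entirely xtrace from the get_meminfo helper in setup/common.sh: it loads a meminfo file and then walks it key by key (the long runs of "[[ <field> == HugePages_Total ]]" followed by "continue") until it reaches the requested field and echoes its value. Below is a minimal bash sketch reconstructed only from the trace shown here (common.sh @17 through @33); the file selection and loop form are simplified assumptions, so the upstream helper may differ in detail.

  shopt -s extglob    # assumed; needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1       # field to report, e.g. HugePages_Total or HugePages_Surp
      local node=${2:-}  # optional NUMA node; empty means the system-wide view
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # Prefer the per-node view when a node was given and the sysfs file exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node rows carry a "Node <n> " prefix; strip it so the keys match
      # the plain /proc/meminfo names.
      mem=("${mem[@]#Node +([0-9]) }")
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

With the values printed above, "get_meminfo HugePages_Total" (no node argument, so /proc/meminfo) echoes 1024, and "get_meminfo HugePages_Surp 0" reads /sys/devices/system/node/node0/meminfo and echoes 0, matching the "@33 echo" lines in the trace.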
************************************ 00:03:29.739 14:30:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:29.739 14:30:49 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:29.739 14:30:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.739 14:30:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.739 14:30:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:29.739 ************************************ 00:03:29.739 START TEST odd_alloc 00:03:29.739 ************************************ 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.739 14:30:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
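The odd_alloc trace above requests 2098176 kB of hugepages, which get_test_nr_hugepages rounds up to nr_hugepages=1025 pages of 2048 kB, and then spreads across the two NUMA nodes so that one node carries the odd page (the trace sets nodes_test[1]=512, then nodes_test[0]=513). A small sketch of that split, using the variable names from the trace; the exact upstream loop in setup/hugepages.sh (@81 to @84) may be written differently.

  _nr_hugepages=1025   # 2098176 kB requested / 2048 kB per page, rounded up
  _no_nodes=2          # NUMA nodes reported under /sys/devices/system/node
  declare -a nodes_test

  while (( _no_nodes > 0 )); do
      # Give the highest-numbered remaining node an even (integer) share and
      # carry the remainder forward, so the last node picks up the odd page.
      nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
      : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))  # 513 left, then 0
      : $(( --_no_nodes ))                                 # 1 node left, then 0
  done

  for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_test[node]}"   # node0=513, node1=512
  done

The test then exports HUGEMEM=2049 with HUGE_EVEN_ALLOC=yes and re-runs scripts/setup.sh, which is what produces the device output that follows.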
00:03:32.278 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:32.278 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.278 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.278 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170496816 kB' 'MemAvailable: 173731044 kB' 'Buffers: 3896 kB' 'Cached: 14667376 kB' 'SwapCached: 0 kB' 'Active: 11546428 kB' 'Inactive: 3694312 kB' 'Active(anon): 11128472 kB' 'Inactive(anon): 0 kB' 
'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571928 kB' 'Mapped: 196048 kB' 'Shmem: 10559004 kB' 'KReclaimable: 532352 kB' 'Slab: 1187248 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654896 kB' 'KernelStack: 20544 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12652868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.279 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.279 14:30:52 
[... the setup/common.sh@31-32 read / [[ ... ]] / continue xtrace repeats for the remaining /proc/meminfo keys, from Inactive through HardwareCorrupted, none of which match AnonHugePages ...]
00:03:32.280 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:32.280 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.280 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:32.280 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
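The anon=0 just recorded, and the surp/resv lookups that follow, all go through the same get_meminfo helper that the trace keeps re-entering. As a reading aid, here is a minimal bash sketch of that lookup pattern, reconstructed from the xtrace lines above. It is a sketch only: the variable names, the /sys/devices/system/node/node$node/meminfo probe and the IFS=': ' field split are taken from the log, while the exact structure of setup/common.sh in the SPDK test tree is an assumption.

#!/usr/bin/env bash
# Sketch only: an approximation of the get_meminfo lookup traced above.
shopt -s extglob  # for the +([0-9]) pattern used when stripping per-node prefixes

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # With an empty $node this probes /sys/devices/system/node/node/meminfo,
    # exactly as seen in the trace, and falls back to the system-wide file.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node N "

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"  # "HugePages_Surp:  0" -> var/val
        [[ $var == "$get" ]] || continue        # the @31/@32 loop seen above
        echo "$val"                             # the @33 echo/return step
        return 0
    done
    return 1
}

get_meminfo AnonHugePages   # 0 in the run traced above
get_meminfo HugePages_Surp  # 0 in the run traced above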
00:03:32.545 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:32.545 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:32.545 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:32.545 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:32.545 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.545 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.545 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.545 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.545 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.545 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.545 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170497728 kB' 'MemAvailable: 173731956 kB' 'Buffers: 3896 kB' 'Cached: 14667380 kB' 'SwapCached: 0 kB' 'Active: 11545248 kB' 'Inactive: 3694312 kB' 'Active(anon): 11127292 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571764 kB' 'Mapped: 195944 kB' 'Shmem: 10559008 kB' 'KReclaimable: 532352 kB' 'Slab: 1187164 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654812 kB' 'KernelStack: 20544 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12652884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB'
[... the setup/common.sh@31-32 loop walks that snapshot key by key, from MemTotal through HugePages_Rsvd, continuing on every key that is not HugePages_Surp ...]
00:03:32.547 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:32.547 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.547 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:32.547 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:32.547 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... setup/common.sh@17-29 run again for HugePages_Rsvd (local get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile -t mem) ...]
00:03:32.547 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170498232 kB' 'MemAvailable: 173732460 kB' 'Buffers: 3896 kB' 'Cached: 14667380 kB' 'SwapCached: 0 kB' 'Active: 11545224 kB' 'Inactive: 3694312 kB' 'Active(anon): 11127268 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571764 kB' 'Mapped: 195944 kB' 'Shmem: 10559008 kB' 'KReclaimable: 532352 kB' 'Slab: 1187164 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654812 kB' 'KernelStack: 20544 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12652904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB'
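For reference, the hugepage counters these repeated lookups extract are all present in the snapshot just printed and can also be read in one shot. An illustrative direct query (the values noted in the comment are the ones this run reports, not a guaranteed output format):

grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize)' /proc/meminfo
# Reported above: AnonHugePages 0 kB, HugePages_Total 1025, HugePages_Free 1025,
# HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB.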
[... the setup/common.sh@31-32 loop walks the snapshot again, continuing on every key until HugePages_Rsvd is reached ...]
00:03:32.549 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:32.549 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.549 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:32.549 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:32.549 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:32.549 nr_hugepages=1025
00:03:32.549 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:32.549 resv_hugepages=0
00:03:32.549 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:32.549 surplus_hugepages=0
00:03:32.549 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:32.549 anon_hugepages=0
00:03:32.549 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:32.549 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
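With the values echoed above plugged in, the two checks reduce to trivial arithmetic. A stand-alone restatement (a sketch of the comparison, not the hugepages.sh source itself; the 1025 on the left-hand side is the literal already expanded in the xtrace line):

# Values echoed above: an odd count of 1025 pages, nothing reserved, surplus or anonymous.
nr_hugepages=1025 surp=0 resv=0 anon=0
(( 1025 == nr_hugepages + surp + resv )) && echo "total matches allocated + surplus + reserved"  # 1025 == 1025 + 0 + 0
(( 1025 == nr_hugepages )) && echo "all 1025 pages visible as HugePages_Total"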
00:03:32.549 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... setup/common.sh@17-29 run once more for HugePages_Total (local get=HugePages_Total, node=, mem_f=/proc/meminfo, mapfile -t mem) ...]
00:03:32.549 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170498048 kB' 'MemAvailable: 173732276 kB' 'Buffers: 3896 kB' 'Cached: 14667416 kB' 'SwapCached: 0 kB' 'Active: 11545324 kB' 'Inactive: 3694312 kB' 'Active(anon): 11127368 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571292 kB' 'Mapped: 195944 kB' 'Shmem: 10559044 kB' 'KReclaimable: 532352 kB' 'Slab: 1187164 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654812 kB' 'KernelStack: 20528 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12652924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB'
[... the setup/common.sh@31-32 loop begins walking this snapshot in the same way (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, ...) ...]
var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.550 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
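The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue" entries in this stretch of the trace is setup/common.sh's get_meminfo helper walking the meminfo snapshot it printed a few entries earlier, key by key, until it reaches HugePages_Total and echoes 1025. The following is a condensed sketch of that scan pattern; get_meminfo_value is a hypothetical name and the loop is a simplification of what the trace shows, not the SPDK helper verbatim.

  # Sketch: scan a meminfo source for one key and echo its value,
  # mirroring the IFS=': ' / read -r var val _ / continue pattern above.
  get_meminfo_value() {            # hypothetical helper, not from the repo
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # MemTotal, MemFree, ... are skipped
      echo "$val"                        # e.g. 1025 for HugePages_Total here
      return 0
    done < /proc/meminfo
    return 1
  }

  get_meminfo_value HugePages_Total      # printed 1025 in this run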
00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92352668 kB' 'MemUsed: 5262960 kB' 'SwapCached: 0 kB' 'Active: 2541296 kB' 'Inactive: 219240 kB' 'Active(anon): 2379472 kB' 
'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2633444 kB' 'Mapped: 71768 kB' 'AnonPages: 130368 kB' 'Shmem: 2252380 kB' 'KernelStack: 11672 kB' 'PageTables: 3368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354428 kB' 'Slab: 665668 kB' 'SReclaimable: 354428 kB' 'SUnreclaim: 311240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
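By this point the helper has been called again as get_meminfo HugePages_Surp 0, so instead of /proc/meminfo it reads /sys/devices/system/node/node0/meminfo and strips the "Node <N> " prefix those lines carry before running the same key scan. A rough sketch of that source selection follows; node_meminfo_lines is a hypothetical name, and extglob is assumed to be enabled since the "+([0-9])" pattern in the trace requires it.

  shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

  # Sketch: pick the meminfo source for an optional NUMA node and normalise
  # its lines so the same key/value scan works for both sources.
  node_meminfo_lines() {           # hypothetical helper, not from the repo
    local node=${1:-} mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # node files print e.g. "Node 0 HugePages_Surp:     0"; drop the prefix
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
  }

  node_meminfo_lines 0 | grep '^HugePages_'   # node0 reported 512 pages here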
00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.551 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.552 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 78144876 kB' 'MemUsed: 15620632 kB' 'SwapCached: 0 kB' 'Active: 9004072 kB' 'Inactive: 3475072 kB' 'Active(anon): 8747940 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3475072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12037912 kB' 'Mapped: 124176 kB' 'AnonPages: 441388 kB' 'Shmem: 8306708 kB' 'KernelStack: 8872 kB' 'PageTables: 5404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177924 kB' 'Slab: 521496 kB' 'SReclaimable: 177924 kB' 'SUnreclaim: 343572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
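What the odd_alloc verification is driving at in this stretch is plain bookkeeping: the system-wide HugePages_Total (1025) must equal the requested nr_hugepages plus surplus plus reserved pages, and in this run the per-node counts read back from the node meminfo files (512 on node0, 513 on node1) account for that total exactly. A small worked example with the values from the trace; the second check is an illustrative consistency check, not a literal line from hugepages.sh.

  # Values as reported in the trace above (odd_alloc run)
  nr_hugepages=1025        # requested by the test
  total=1025 surp=0 resv=0 # HugePages_Total / HugePages_Surp / HugePages_Rsvd
  node0=512 node1=513      # per-node HugePages_Total

  (( total == nr_hugepages + surp + resv )) || echo "global count mismatch"
  (( node0 + node1 == total ))              || echo "per-node split mismatch"

Both relations hold here, which is why the trace proceeds directly to the per-node HugePages_Surp queries.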
00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.553 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
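The "node0=512 expecting 513" and "node1=513 expecting 512" lines that the trace is about to print are not failures: with an odd page count the test pre-computes one split while the kernel may realise the mirror image, so verify_nr_hugepages only requires that the two per-node distributions match as unordered sets of counts. The sorted_t/sorted_s arrays seen at hugepages.sh@127 do this by using the counts themselves as indices of sparse indexed arrays, whose index list comes back in ascending order. A sketch of the same trick with the values from this run:

  # Sketch of the sorted_t / sorted_s comparison from the trace.
  nodes_test=(512 513)   # counts actually reported per node
  nodes_sys=(513 512)    # counts the test expected per node
  sorted_t=() sorted_s=()

  for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1   # indices 512 and 513 get set
    sorted_s[nodes_sys[node]]=1
  done

  # "${!arr[*]}" lists the indices of an indexed array in ascending order,
  # so both sides read "512 513" and the comparison succeeds.
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'distribution matches'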
00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:32.554 node0=512 expecting 513 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:32.554 node1=513 expecting 512 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:32.554 00:03:32.554 real 0m2.925s 00:03:32.554 user 0m1.199s 00:03:32.554 sys 0m1.791s 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.554 14:30:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:32.554 ************************************ 00:03:32.554 END TEST odd_alloc 00:03:32.554 ************************************ 00:03:32.554 14:30:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:32.554 14:30:52 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:32.554 14:30:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.554 14:30:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.554 14:30:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.554 ************************************ 00:03:32.554 START TEST custom_alloc 00:03:32.554 ************************************ 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 
1 )) 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.554 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.555 14:30:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:35.099 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:35.099 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:35.099 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:35.099 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 
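The "Already using the vfio-pci driver" lines around this point come from scripts/setup.sh, which the custom_alloc case has just re-run after converting its two size requests (1048576 kB and 2097152 kB, i.e. 512 and 1024 two-megabyte pages) into nodes_hp[0]=512 and nodes_hp[1]=1024 and joining them into HUGENODE. A small sketch of assembling such a HUGENODE string and the matching total, assuming the comma-joined "nodes_hp[N]=count" form shown in the trace:

  # Per-node targets as computed in the trace (2048 kB hugepages):
  #   1048576 kB / 2048 kB =  512 pages for node 0
  #   2097152 kB / 2048 kB = 1024 pages for node 1
  nodes_hp=([0]=512 [1]=1024)

  HUGENODE=() nr_hugepages=0
  for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( nr_hugepages += nodes_hp[node] ))
  done

  hugenode_str=$(IFS=, ; echo "${HUGENODE[*]}")
  echo "$hugenode_str"     # nodes_hp[0]=512,nodes_hp[1]=1024
  echo "$nr_hugepages"     # 1536, the count verify_nr_hugepages checks next

The verify step that follows in the trace reads the counts back through the same get_meminfo scan and compares them against this 1536-page target.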
00:03:35.099 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:35.099 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:35.099 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:35.099 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:35.099 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:35.099 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:35.099 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:35.099 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:35.099 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:35.099 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:35.099 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:35.099 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:35.099 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169450096 kB' 'MemAvailable: 172684324 kB' 'Buffers: 3896 kB' 'Cached: 14667520 kB' 'SwapCached: 0 kB' 'Active: 11546060 kB' 'Inactive: 3694312 kB' 'Active(anon): 11128104 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572300 kB' 'Mapped: 195984 kB' 'Shmem: 10559148 kB' 'KReclaimable: 532352 kB' 'Slab: 1187756 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 655404 kB' 'KernelStack: 20576 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12653272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317144 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
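The long runs of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" entries here are the traced get_meminfo helper scanning /proc/meminfo key by key until it reaches the field it was asked for (AnonHugePages at this point; HugePages_Surp and HugePages_Rsvd further down). A condensed sketch of that logic follows; the names mirror setup/common.sh in the trace, but it is a reconstruction rather than the actual script, and it assumes single-digit NUMA node ids when a node argument is given.

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument the per-node meminfo file is used instead (common.sh@23).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val rest
        while read -r line; do
            line=${line#Node [0-9] }          # per-node files prefix each line with "Node N "
            IFS=': ' read -r var val rest <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < "$mem_f"
        echo 0                                # fallback if the key is never found (not exercised here)
    }
    # get_meminfo AnonHugePages  -> 0 on this machine, hence "hugepages.sh@97 -- # anon=0" later in the trace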
00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.099 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.100 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169449820 kB' 'MemAvailable: 172684048 kB' 'Buffers: 3896 kB' 'Cached: 14667524 kB' 'SwapCached: 0 kB' 'Active: 11545736 kB' 'Inactive: 3694312 kB' 'Active(anon): 11127780 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571976 kB' 'Mapped: 195960 kB' 'Shmem: 10559152 kB' 'KReclaimable: 532352 kB' 'Slab: 1187844 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 655492 kB' 'KernelStack: 20576 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12653292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 
14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.101 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:35.102 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.103 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.368 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.368 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.368 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.368 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.368 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.368 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.368 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.368 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.368 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.368 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.368 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.368 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169452116 kB' 'MemAvailable: 172686344 kB' 'Buffers: 3896 kB' 'Cached: 14667540 kB' 'SwapCached: 0 kB' 'Active: 11545752 kB' 'Inactive: 3694312 kB' 'Active(anon): 11127796 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571976 kB' 'Mapped: 195960 kB' 'Shmem: 10559168 kB' 'KReclaimable: 532352 kB' 'Slab: 1187836 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 655484 kB' 'KernelStack: 20576 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12653312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.369 14:30:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.369 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
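While the remaining get_meminfo passes (HugePages_Surp above, HugePages_Rsvd here) work through the same scan, the totals in the meminfo dumps already line up with the HUGENODE split requested earlier. A quick arithmetic check, assuming the 2048 kB page size reported as Hugepagesize in the dumps:

    echo $(( 512 + 1024 ))           # 1536     == HugePages_Total and HugePages_Free above
    echo $(( (512 + 1024) * 2048 ))  # 3145728  == the Hugetlb figure (kB) in the same dumps

Both surplus and reserved pages read back as 0 in these dumps, matching the nr_hugepages=1536 set at hugepages.sh@188.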
00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.370 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:35.371 nr_hugepages=1536 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.371 resv_hugepages=0 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.371 surplus_hugepages=0 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.371 anon_hugepages=0 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169452304 kB' 'MemAvailable: 172686532 kB' 'Buffers: 3896 kB' 'Cached: 14667560 kB' 'SwapCached: 0 kB' 'Active: 11546020 kB' 'Inactive: 3694312 kB' 'Active(anon): 11128064 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 
0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572252 kB' 'Mapped: 195964 kB' 'Shmem: 10559188 kB' 'KReclaimable: 532352 kB' 'Slab: 1187836 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 655484 kB' 'KernelStack: 20560 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12653336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317144 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.371 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.372 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 14:30:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- 
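For reference, the trace above is common.sh's get_meminfo walking a meminfo snapshot key by key (mapfile the file, strip any per-node "Node <N> " prefix, then an IFS=': ' read loop) until it reaches the requested field. A minimal stand-alone sketch of that lookup pattern follows; the function and variable names are illustrative only, not the SPDK helper itself:

#!/usr/bin/env bash
shopt -s extglob
# Illustrative sketch: look one key up in /proc/meminfo, or in
# /sys/devices/system/node/node<N>/meminfo when a node number is given
# (per-node files prefix every line with "Node <N> ").
meminfo_lookup() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local -a mem
    local line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")              # drop the per-node prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# e.g. meminfo_lookup HugePages_Total     (1536 on this machine)
#      meminfo_lookup HugePages_Free 0    (per-node 0 value, 512 here)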
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:35.373 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92347340 kB' 'MemUsed: 5268288 kB' 'SwapCached: 0 kB' 'Active: 2543900 kB' 'Inactive: 219240 kB' 'Active(anon): 2382076 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2633456 kB' 'Mapped: 71780 kB' 'AnonPages: 132956 kB' 'Shmem: 2252392 kB' 'KernelStack: 11688 kB' 'PageTables: 3460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354428 kB' 'Slab: 666180 kB' 'SReclaimable: 354428 kB' 'SUnreclaim: 311752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32 xtrace elided: the read/compare loop walks the node0 snapshot above until HugePages_Surp matches]
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:35.375 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77104460 kB' 'MemUsed: 16661048 kB' 'SwapCached: 0 kB' 'Active: 9002308 kB' 'Inactive: 3475072 kB' 'Active(anon): 8746176 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3475072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12038040 kB' 'Mapped: 124184 kB' 'AnonPages: 439376 kB' 'Shmem: 8306836 kB' 'KernelStack: 8872 kB' 'PageTables: 5312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177924 kB' 'Slab: 521656 kB' 'SReclaimable: 177924 kB' 'SUnreclaim: 343732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
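At this point the test has read back the split it set up: 512 of the 1536 2048 kB hugepages sit on node 0 and 1024 on node 1. The kernel exposes this per-node placement through its standard per-node hugetlb sysfs files; below is a hedged sketch of requesting and re-reading such a split (stock kernel paths, the numbers come from this run, and this is not the SPDK setup.sh itself):

#!/usr/bin/env bash
# Illustrative sketch (needs root): ask for 512 2MB hugepages on node0 and
# 1024 on node1, then read the per-node counters back.
declare -A want=([0]=512 [1]=1024)      # the split seen in this log
sysfs=/sys/devices/system/node
for node in "${!want[@]}"; do
    echo "${want[$node]}" > "$sysfs/node$node/hugepages/hugepages-2048kB/nr_hugepages"
done
for node in "${!want[@]}"; do
    got=$(cat "$sysfs/node$node/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node$node: nr_hugepages=$got (wanted ${want[$node]})"
done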
[setup/common.sh@31-32 xtrace elided: the read/compare loop walks the node1 snapshot above (MemUsed ... HugePages_Free) looking for HugePages_Surp; the captured log breaks off part-way through this scan]
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.376 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.376 14:30:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.376 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.376 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.376 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.376 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.376 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:35.376 node0=512 expecting 512 00:03:35.376 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.376 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.376 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.376 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:35.376 node1=1024 expecting 1024 00:03:35.376 14:30:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:35.376 00:03:35.376 real 0m2.730s 00:03:35.376 user 0m1.063s 00:03:35.376 sys 0m1.707s 00:03:35.376 14:30:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.376 14:30:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:35.376 ************************************ 00:03:35.376 END TEST custom_alloc 00:03:35.376 ************************************ 00:03:35.376 14:30:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:35.376 14:30:55 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:35.376 14:30:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.376 14:30:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.376 14:30:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:35.376 ************************************ 00:03:35.376 START TEST no_shrink_alloc 00:03:35.376 ************************************ 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- 
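The block above closes the custom_alloc check (each node's hugepage count is compared against the expected "512,1024" split) and opens the next request, which converts a 2097152 kB reservation into 2 MiB hugepages before the per-node assignment that continues below. A minimal bash sketch of that arithmetic and comparison, with illustrative names and the values taken from the log; this is a reconstruction, not the SPDK setup/hugepages.sh code itself:

#!/usr/bin/env bash
# Sketch of the checks the surrounding xtrace performs (illustrative only).
size_kb=2097152                            # requested reservation in kB (from the log)
hugepage_kb=2048                           # Hugepagesize reported by /proc/meminfo (from the log)
nr_hugepages=$(( size_kb / hugepage_kb ))  # 2097152 / 2048 = 1024 hugepages
declare -A nodes_test=([0]=512 [1]=1024)   # per-node counts observed above
expected="512,1024"
actual="${nodes_test[0]},${nodes_test[1]}"
[[ "$actual" == "$expected" ]] && echo "nr_hugepages=$nr_hugepages, per-node split OK"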
# get_test_nr_hugepages_per_node 0 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:35.376 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:35.377 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:35.377 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:35.377 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:35.377 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.377 14:30:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.680 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:38.680 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:38.680 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.680 
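The xtrace that follows is setup/common.sh's get_meminfo scanning /proc/meminfo line by line with IFS=': ' until it finds the requested field (AnonHugePages, then HugePages_Surp and HugePages_Rsvd). A simplified, self-contained sketch of that parsing pattern; the real helper additionally supports per-node files under /sys/devices/system/node and strips their "Node <n>" prefix, and the function name here is illustrative:

#!/usr/bin/env bash
# Simplified /proc/meminfo field lookup (illustrative, not the SPDK helper verbatim).
get_meminfo_field() {
    local get=$1
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested one, then print its value.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_field HugePages_Surp   # prints 0 on the system captured in this log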
14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170468696 kB' 'MemAvailable: 173702924 kB' 'Buffers: 3896 kB' 'Cached: 14667676 kB' 'SwapCached: 0 kB' 'Active: 11546872 kB' 'Inactive: 3694312 kB' 'Active(anon): 11128916 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572996 kB' 'Mapped: 195984 kB' 'Shmem: 10559304 kB' 'KReclaimable: 532352 kB' 'Slab: 1187540 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 655188 kB' 'KernelStack: 20560 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12654024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.680 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace trimmed: the same skip pattern repeats while get_meminfo scans past the intervening /proc/meminfo fields (Buffers, Cached, SwapCached, Active, Inactive, Active/Inactive(anon), Active/Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal) looking for AnonHugePages]
00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[
VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170480296 kB' 'MemAvailable: 173714524 kB' 'Buffers: 3896 kB' 'Cached: 14667680 kB' 'SwapCached: 0 kB' 'Active: 11546484 kB' 'Inactive: 3694312 kB' 'Active(anon): 11128528 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 
kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572556 kB' 'Mapped: 195976 kB' 'Shmem: 10559308 kB' 'KReclaimable: 532352 kB' 'Slab: 1187560 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 655208 kB' 'KernelStack: 20560 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12654044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317144 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.682 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.682 14:30:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace trimmed: the same skip pattern repeats while get_meminfo scans past the intervening /proc/meminfo fields (Active through Unaccepted) looking for HugePages_Surp]
00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 --
# read -r var val _ 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170480676 kB' 'MemAvailable: 173714904 kB' 'Buffers: 3896 kB' 'Cached: 14667692 kB' 'SwapCached: 0 kB' 'Active: 11546488 kB' 'Inactive: 3694312 kB' 'Active(anon): 11128532 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572520 kB' 'Mapped: 195976 kB' 
'Shmem: 10559320 kB' 'KReclaimable: 532352 kB' 'Slab: 1187560 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 655208 kB' 'KernelStack: 20544 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12654064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317144 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 
14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:38.686 nr_hugepages=1024 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.686 resv_hugepages=0 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.686 surplus_hugepages=0 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.686 anon_hugepages=0 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.686 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170480172 kB' 'MemAvailable: 173714400 kB' 'Buffers: 3896 kB' 'Cached: 14667692 kB' 'SwapCached: 0 kB' 'Active: 11546528 kB' 'Inactive: 3694312 kB' 'Active(anon): 11128572 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572612 kB' 'Mapped: 195976 kB' 'Shmem: 10559320 kB' 'KReclaimable: 532352 kB' 'Slab: 1187560 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 655208 kB' 'KernelStack: 20560 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12654088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.687 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91288068 kB' 'MemUsed: 6327560 kB' 'SwapCached: 0 kB' 'Active: 2545360 kB' 'Inactive: 219240 kB' 'Active(anon): 2383536 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2633488 kB' 'Mapped: 71796 kB' 'AnonPages: 134244 kB' 'Shmem: 2252424 kB' 'KernelStack: 11656 kB' 'PageTables: 3268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354428 kB' 'Slab: 666136 kB' 'SReclaimable: 354428 kB' 'SUnreclaim: 311708 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 
14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.689 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:38.690 node0=1024 expecting 1024 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.690 14:30:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.313 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:41.313 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:41.313 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:41.313 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:41.313 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:41.313 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.313 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.313 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.313 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:41.313 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:41.313 0000:80:04.6 (8086 2021): Already using the 
vfio-pci driver 00:03:41.313 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:41.313 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:41.313 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.313 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.313 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.313 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:41.313 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:41.313 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:41.313 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:41.313 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.313 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170491768 kB' 'MemAvailable: 173725996 kB' 'Buffers: 3896 kB' 'Cached: 14667804 kB' 'SwapCached: 0 kB' 'Active: 11546016 kB' 'Inactive: 3694312 kB' 'Active(anon): 11128060 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571348 kB' 'Mapped: 196080 kB' 'Shmem: 10559432 kB' 'KReclaimable: 532352 kB' 'Slab: 1186600 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654248 kB' 'KernelStack: 20576 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12654704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
317256 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.314 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 
14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 
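Note: the xtrace above is the hugepage accounting helper walking a meminfo file one "Key: value" pair at a time (IFS=': '; read -r var val _), skipping every key until it reaches the requested counter and then echoing its value -- here HugePages_Surp, which comes back 0. A minimal, self-contained sketch of that lookup follows; the function name and argument handling are illustrative assumptions, not the exact setup/common.sh helper.

# Sketch: look up one counter from /proc/meminfo, or from the per-node copy
# under /sys when a node number is given (per-node lines carry a "Node N " prefix).
get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Split each line on ': ' exactly as the traced loop does; the third field
    # swallows the trailing "kB" unit when one is present.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$file")
    return 1
}
# Example: "get_meminfo_sketch HugePages_Free 0" prints the per-node free count
# that the "node0=1024 expecting 1024" check earlier in this log compares against.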
00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170491516 kB' 'MemAvailable: 173725744 kB' 'Buffers: 3896 kB' 'Cached: 14667808 kB' 'SwapCached: 0 kB' 'Active: 11545236 kB' 'Inactive: 3694312 kB' 'Active(anon): 11127280 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571084 kB' 'Mapped: 195984 kB' 'Shmem: 10559436 kB' 'KReclaimable: 532352 kB' 'Slab: 1186572 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654220 kB' 'KernelStack: 20560 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12654720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.315 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.316 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:41.317 
14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170491516 kB' 'MemAvailable: 173725744 kB' 'Buffers: 3896 kB' 'Cached: 14667828 kB' 'SwapCached: 0 kB' 'Active: 11545304 kB' 'Inactive: 3694312 kB' 'Active(anon): 11127348 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571084 kB' 'Mapped: 195984 kB' 'Shmem: 10559456 kB' 'KReclaimable: 532352 kB' 'Slab: 1186572 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654220 kB' 'KernelStack: 20544 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12654744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317256 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.317 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.318 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:41.319 nr_hugepages=1024 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:41.319 resv_hugepages=0 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:41.319 surplus_hugepages=0 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:41.319 anon_hugepages=0 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local 
node= 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.319 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170491264 kB' 'MemAvailable: 173725492 kB' 'Buffers: 3896 kB' 'Cached: 14667868 kB' 'SwapCached: 0 kB' 'Active: 11544760 kB' 'Inactive: 3694312 kB' 'Active(anon): 11126804 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570504 kB' 'Mapped: 195984 kB' 'Shmem: 10559496 kB' 'KReclaimable: 532352 kB' 'Slab: 1186572 kB' 'SReclaimable: 532352 kB' 'SUnreclaim: 654220 kB' 'KernelStack: 20512 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12654764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317224 kB' 'VmallocChunk: 0 kB' 'Percpu: 120960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.320 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.321 14:31:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.321 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91295260 kB' 'MemUsed: 6320368 kB' 'SwapCached: 0 kB' 'Active: 2543772 kB' 'Inactive: 219240 kB' 'Active(anon): 2381948 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2633492 kB' 'Mapped: 71804 kB' 'AnonPages: 132676 kB' 'Shmem: 2252428 kB' 'KernelStack: 11688 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 354428 kB' 'Slab: 665456 kB' 'SReclaimable: 354428 kB' 'SUnreclaim: 311028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.322 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.323 
14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
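The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]" / "continue" entries above are the xtrace of the get_meminfo helper in setup/common.sh: it reads /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node is given), strips any leading "Node <N>" prefix, and walks the keys one by one until it reaches the requested counter, here HugePages_Rsvd, HugePages_Total and HugePages_Surp in turn. The snippet below is a minimal stand-alone sketch of that technique, not the SPDK helper itself; get_meminfo_value is a hypothetical name used only for illustration.

shopt -s extglob    # the +([0-9]) pattern below needs extended globbing

# Hypothetical helper, illustration only -- not SPDK's get_meminfo.
get_meminfo_value() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    # Per-node counters live under sysfs and prefix every line with "Node <N> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }             # drop the per-node prefix if present
        IFS=': ' read -r var val _ <<< "$line"  # e.g. var=HugePages_Total val=1024
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

# Examples matching the values printed in this run:
#   get_meminfo_value HugePages_Total     -> 1024
#   get_meminfo_value HugePages_Surp 0    -> 0   (node 0)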
00:03:41.323 node0=1024 expecting 1024 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:41.323 00:03:41.323 real 0m5.717s 00:03:41.323 user 0m2.294s 00:03:41.323 sys 0m3.552s 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.323 14:31:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:41.323 ************************************ 00:03:41.323 END TEST no_shrink_alloc 00:03:41.323 ************************************ 00:03:41.323 14:31:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:41.323 14:31:01 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:41.323 14:31:01 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:41.323 14:31:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:41.323 14:31:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.323 14:31:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:41.323 14:31:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.323 14:31:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:41.323 14:31:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:41.323 14:31:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.323 14:31:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:41.323 14:31:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.323 14:31:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:41.323 14:31:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:41.323 14:31:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:41.323 00:03:41.323 real 0m21.474s 00:03:41.323 user 0m8.271s 00:03:41.323 sys 0m12.793s 00:03:41.323 14:31:01 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.323 14:31:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:41.323 ************************************ 00:03:41.323 END TEST hugepages 00:03:41.323 ************************************ 00:03:41.323 14:31:01 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:41.323 14:31:01 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:41.323 14:31:01 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.323 14:31:01 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.323 14:31:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:41.323 ************************************ 00:03:41.323 START TEST driver 00:03:41.323 ************************************ 00:03:41.323 14:31:01 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:41.323 * Looking for test storage... 
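Between the two END banners above, clear_hp resets the hugepage pools the suite had been resizing: it loops over every NUMA node, writes 0 into each hugepages-*/nr_hugepages file, and exports CLEAR_HUGE=yes for the scripts that follow. A rough equivalent under the standard sysfs layout (destructive to any reserved hugepages, needs root; the redirection target is implied by the trace rather than shown in it):

  #!/usr/bin/env bash
  # Drop every per-node hugepage reservation, for all configured page sizes.
  for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
      echo 0 > "$hp/nr_hugepages"
    done
  done
  export CLEAR_HUGE=yes   # exported here exactly as in the trace above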
00:03:41.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:41.323 14:31:01 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:41.323 14:31:01 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.323 14:31:01 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.520 14:31:05 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:45.520 14:31:05 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.520 14:31:05 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.520 14:31:05 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:45.520 ************************************ 00:03:45.520 START TEST guess_driver 00:03:45.520 ************************************ 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:45.520 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:45.520 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:45.520 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:45.520 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:45.520 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:45.520 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:45.520 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:45.520 14:31:05 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:45.520 Looking for driver=vfio-pci 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.520 14:31:05 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.059 14:31:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.997 14:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.997 14:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.997 14:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.997 14:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:48.997 14:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:48.997 14:31:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.997 14:31:09 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.195 00:03:53.195 real 0m7.690s 00:03:53.195 user 0m2.244s 00:03:53.195 sys 0m3.915s 00:03:53.195 14:31:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.195 14:31:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:53.195 ************************************ 00:03:53.195 END TEST guess_driver 00:03:53.195 ************************************ 00:03:53.195 14:31:13 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:53.195 00:03:53.195 real 0m11.751s 00:03:53.195 user 0m3.422s 00:03:53.195 sys 0m6.092s 00:03:53.195 14:31:13 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.195 14:31:13 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:53.195 ************************************ 00:03:53.195 END TEST driver 00:03:53.195 ************************************ 00:03:53.195 14:31:13 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:53.195 14:31:13 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:53.195 14:31:13 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.195 14:31:13 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.195 14:31:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:53.195 ************************************ 00:03:53.195 START TEST devices 00:03:53.195 ************************************ 00:03:53.195 14:31:13 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:53.195 * Looking for test storage... 00:03:53.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:53.195 14:31:13 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:53.195 14:31:13 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:53.195 14:31:13 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.195 14:31:13 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:56.489 14:31:16 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:56.489 14:31:16 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:56.489 14:31:16 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:56.489 14:31:16 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.489 14:31:16 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:56.489 14:31:16 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:56.489 14:31:16 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.489 14:31:16 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:56.489 14:31:16 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:56.489 
14:31:16 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:56.489 No valid GPT data, bailing 00:03:56.489 14:31:16 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:56.489 14:31:16 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:56.489 14:31:16 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:56.489 14:31:16 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:56.489 14:31:16 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:56.489 14:31:16 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:56.489 14:31:16 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:56.489 14:31:16 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.489 14:31:16 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.489 14:31:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:56.489 ************************************ 00:03:56.489 START TEST nvme_mount 00:03:56.489 ************************************ 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.489 14:31:16 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.490 14:31:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:56.490 14:31:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:56.490 14:31:16 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:56.490 14:31:16 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:56.490 14:31:16 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:57.430 Creating new GPT entries in memory. 00:03:57.430 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:57.430 other utilities. 00:03:57.430 14:31:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:57.430 14:31:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.430 14:31:17 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:57.430 14:31:17 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:57.430 14:31:17 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:58.370 Creating new GPT entries in memory. 00:03:58.370 The operation has completed successfully. 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2128574 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.370 14:31:18 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.370 14:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:01.661 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.661 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:01.662 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:01.662 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:01.662 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:01.662 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.662 14:31:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.256 14:31:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
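Stepping back, the nvme_mount test above boils down to a partition/format/mount/verify/teardown cycle against the scratch disk at 0000:5e:00.0, first on a single GPT partition and then on the bare device. A condensed sketch using the same tools the trace shows (sgdisk, mkfs.ext4, wipefs) but a hypothetical mount point; it is destructive, so only ever point it at a disposable disk:

  #!/usr/bin/env bash
  set -euo pipefail
  disk=/dev/nvme0n1          # scratch NVMe namespace, as in the trace
  mnt=/tmp/nvme_mount        # hypothetical stand-in for the test's mount dir

  sgdisk "$disk" --zap-all               # wipe any existing GPT/MBR metadata
  sgdisk "$disk" --new=1:2048:2099199    # one 1 GiB partition, same LBAs as the trace
  udevadm settle                         # wait for /dev/nvme0n1p1 to appear

  mkfs.ext4 -qF "${disk}p1"
  mkdir -p "$mnt"
  mount "${disk}p1" "$mnt"
  touch "$mnt/test_nvme"                 # the dummy file the verify step looks for

  umount "$mnt"                          # teardown mirrors cleanup_nvme
  wipefs --all "${disk}p1"
  wipefs --all "$disk"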
00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:06.796 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:06.796 00:04:06.796 real 0m10.240s 00:04:06.796 user 0m2.906s 00:04:06.796 sys 0m5.132s 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.796 14:31:26 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:06.796 ************************************ 00:04:06.796 END TEST nvme_mount 00:04:06.796 ************************************ 00:04:06.796 14:31:26 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:06.796 14:31:26 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:06.796 14:31:26 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.796 14:31:26 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.796 14:31:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:06.796 ************************************ 00:04:06.796 START TEST dm_mount 00:04:06.796 ************************************ 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:06.796 14:31:26 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:07.729 Creating new GPT entries in memory. 00:04:07.729 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:07.729 other utilities. 00:04:07.729 14:31:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:07.729 14:31:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.729 14:31:27 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:07.729 14:31:27 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:07.729 14:31:27 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:08.667 Creating new GPT entries in memory. 00:04:08.667 The operation has completed successfully. 00:04:08.667 14:31:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:08.667 14:31:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:08.667 14:31:28 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:08.667 14:31:28 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:08.667 14:31:28 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:09.604 The operation has completed successfully. 00:04:09.604 14:31:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:09.604 14:31:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:09.604 14:31:29 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2132533 00:04:09.604 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:09.604 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.604 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:09.604 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:09.604 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.863 14:31:29 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:12.399 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:12.400 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:12.400 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:12.400 14:31:32 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:12.400 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:12.400 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:12.400 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:12.400 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:12.400 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:12.400 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:12.400 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.400 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:12.400 14:31:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:12.400 14:31:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.400 14:31:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:14.935 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.936 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.195 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.195 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:15.195 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:15.195 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:15.195 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.195 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:15.195 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:15.195 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.195 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:15.195 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:15.195 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:15.195 14:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:15.195 00:04:15.195 real 0m8.531s 00:04:15.195 user 0m2.074s 00:04:15.195 sys 0m3.472s 00:04:15.195 14:31:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.195 14:31:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:15.195 ************************************ 00:04:15.195 END TEST dm_mount 00:04:15.195 ************************************ 00:04:15.195 14:31:35 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:15.195 14:31:35 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:15.195 14:31:35 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:15.195 14:31:35 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.195 14:31:35 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.195 14:31:35 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:15.195 14:31:35 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.195 14:31:35 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:15.454 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:15.454 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:15.454 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:15.454 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:15.454 14:31:35 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:15.454 14:31:35 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.454 14:31:35 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:15.454 14:31:35 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.454 14:31:35 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:15.454 14:31:35 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.454 14:31:35 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:15.454 00:04:15.454 real 0m22.423s 00:04:15.454 user 0m6.208s 00:04:15.454 sys 0m10.895s 00:04:15.454 14:31:35 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.454 14:31:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:15.454 ************************************ 00:04:15.454 END TEST devices 00:04:15.454 ************************************ 00:04:15.454 14:31:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:15.454 00:04:15.454 real 1m15.484s 00:04:15.454 user 0m24.765s 00:04:15.454 sys 0m41.576s 00:04:15.454 14:31:35 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.454 14:31:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:15.454 ************************************ 00:04:15.454 END TEST setup.sh 00:04:15.454 ************************************ 00:04:15.454 14:31:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:15.454 14:31:35 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:18.745 Hugepages 00:04:18.745 node hugesize free / total 00:04:18.745 node0 1048576kB 0 / 0 00:04:18.745 node0 2048kB 2048 / 2048 00:04:18.745 node1 1048576kB 0 / 0 00:04:18.745 node1 2048kB 0 / 0 00:04:18.745 00:04:18.745 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.745 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:18.745 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:18.745 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:18.745 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:18.745 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:18.745 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:18.745 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:18.745 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:18.745 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:18.745 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:18.745 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:18.745 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:18.745 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:18.745 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:18.745 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:18.745 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:18.745 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:18.745 14:31:38 -- spdk/autotest.sh@130 -- # uname -s 00:04:18.745 14:31:38 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:18.745 14:31:38 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:18.745 14:31:38 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.282 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:21.282 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:21.852 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:21.852 14:31:42 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:23.245 14:31:43 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:23.245 14:31:43 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:23.245 14:31:43 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:23.245 14:31:43 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:23.245 14:31:43 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:23.245 14:31:43 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:23.246 14:31:43 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:23.246 14:31:43 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:23.246 14:31:43 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:23.246 14:31:43 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:23.246 14:31:43 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:23.246 14:31:43 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.788 Waiting for block devices as requested 00:04:25.788 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:25.788 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:25.788 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:25.788 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:26.048 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:26.048 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:26.048 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:26.048 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:26.308 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:26.308 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:26.308 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:26.308 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:26.567 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:26.567 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:26.567 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:26.827 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:26.827 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:26.827 14:31:47 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:26.827 14:31:47 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:26.827 14:31:47 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:26.827 14:31:47 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:26.827 14:31:47 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:26.827 14:31:47 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:26.827 14:31:47 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:26.828 14:31:47 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:26.828 14:31:47 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:26.828 14:31:47 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:26.828 14:31:47 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:26.828 14:31:47 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:26.828 14:31:47 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:26.828 14:31:47 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:26.828 14:31:47 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:26.828 14:31:47 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:26.828 14:31:47 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:26.828 14:31:47 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:26.828 14:31:47 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:26.828 14:31:47 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:26.828 14:31:47 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:26.828 14:31:47 -- common/autotest_common.sh@1557 -- # continue 00:04:26.828 14:31:47 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:26.828 14:31:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:26.828 14:31:47 -- common/autotest_common.sh@10 -- # set +x 00:04:26.828 14:31:47 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:26.828 14:31:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.828 14:31:47 -- common/autotest_common.sh@10 -- # set +x 00:04:26.828 14:31:47 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.400 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:29.400 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:29.400 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:29.400 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:29.400 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:29.400 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:29.400 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:29.400 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:29.400 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:29.400 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
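The pre-cleanup loop above resolves the NVMe bdf to its character device and only attempts a namespace revert when the controller advertises namespace management (OACS bit 0x8) and reports unallocated capacity; here oacs is 0xe and unvmcap is 0, so the loop simply continues. A standalone sketch of the same check with nvme-cli (assuming the controller is /dev/nvme0):

    # Namespace Management/Attachment support is bit 3 (0x8) of OACS
    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
    if (( oacs & 0x8 )); then echo "namespace management supported"; fi
    # Unallocated NVM capacity; 0 means there is nothing to revert
    unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && echo "no unallocated capacity, skipping revert"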
00:04:29.400 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:29.400 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:29.400 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:29.400 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:29.400 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:29.400 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:30.340 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:30.340 14:31:50 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:30.340 14:31:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:30.340 14:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:30.600 14:31:50 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:30.600 14:31:50 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:30.600 14:31:50 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:30.600 14:31:50 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:30.600 14:31:50 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:30.600 14:31:50 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:30.600 14:31:50 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:30.600 14:31:50 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:30.600 14:31:50 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:30.600 14:31:50 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:30.600 14:31:50 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:30.600 14:31:50 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:30.600 14:31:50 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:30.600 14:31:50 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:30.600 14:31:50 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:30.600 14:31:50 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:30.600 14:31:50 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:30.600 14:31:50 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:30.600 14:31:50 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:30.600 14:31:50 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:30.600 14:31:50 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2141279 00:04:30.600 14:31:50 -- common/autotest_common.sh@1598 -- # waitforlisten 2141279 00:04:30.600 14:31:50 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.600 14:31:50 -- common/autotest_common.sh@829 -- # '[' -z 2141279 ']' 00:04:30.601 14:31:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.601 14:31:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.601 14:31:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.601 14:31:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.601 14:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:30.601 [2024-07-25 14:31:50.777768] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
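Each 'ioatdma -> vfio-pci' and 'nvme -> vfio-pci' line above is scripts/setup.sh detaching a device from its kernel driver so SPDK's userspace drivers can claim it through VFIO (the earlier 'vfio-pci -> ioatdma' lines are the reverse during reset). The underlying mechanism is plain sysfs; a hand-rolled sketch for a single device, run as root (setup.sh additionally handles hugepages, permissions and allow/block lists):

    bdf=0000:80:04.0
    # Detach from whatever kernel driver currently owns the device
    echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind 2>/dev/null || true
    # Make the next probe pick vfio-pci, then trigger the probe
    echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override
    echo "$bdf" > /sys/bus/pci/drivers_probe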
00:04:30.601 [2024-07-25 14:31:50.777815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141279 ] 00:04:30.601 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.601 [2024-07-25 14:31:50.830007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.860 [2024-07-25 14:31:50.913598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.429 14:31:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.429 14:31:51 -- common/autotest_common.sh@862 -- # return 0 00:04:31.429 14:31:51 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:31.429 14:31:51 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:31.429 14:31:51 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:34.721 nvme0n1 00:04:34.721 14:31:54 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:34.721 [2024-07-25 14:31:54.748032] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:34.721 request: 00:04:34.721 { 00:04:34.721 "nvme_ctrlr_name": "nvme0", 00:04:34.721 "password": "test", 00:04:34.721 "method": "bdev_nvme_opal_revert", 00:04:34.721 "req_id": 1 00:04:34.721 } 00:04:34.721 Got JSON-RPC error response 00:04:34.721 response: 00:04:34.721 { 00:04:34.721 "code": -32602, 00:04:34.721 "message": "Invalid parameters" 00:04:34.721 } 00:04:34.721 14:31:54 -- common/autotest_common.sh@1604 -- # true 00:04:34.721 14:31:54 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:34.721 14:31:54 -- common/autotest_common.sh@1608 -- # killprocess 2141279 00:04:34.721 14:31:54 -- common/autotest_common.sh@948 -- # '[' -z 2141279 ']' 00:04:34.721 14:31:54 -- common/autotest_common.sh@952 -- # kill -0 2141279 00:04:34.721 14:31:54 -- common/autotest_common.sh@953 -- # uname 00:04:34.721 14:31:54 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:34.721 14:31:54 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2141279 00:04:34.721 14:31:54 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:34.721 14:31:54 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:34.721 14:31:54 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2141279' 00:04:34.721 killing process with pid 2141279 00:04:34.721 14:31:54 -- common/autotest_common.sh@967 -- # kill 2141279 00:04:34.721 14:31:54 -- common/autotest_common.sh@972 -- # wait 2141279 00:04:36.627 14:31:56 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:36.627 14:31:56 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:36.627 14:31:56 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:36.627 14:31:56 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:36.627 14:31:56 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:36.627 14:31:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.627 14:31:56 -- common/autotest_common.sh@10 -- # set +x 00:04:36.627 14:31:56 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:36.627 14:31:56 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:36.627 14:31:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:04:36.627 14:31:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.627 14:31:56 -- common/autotest_common.sh@10 -- # set +x 00:04:36.627 ************************************ 00:04:36.627 START TEST env 00:04:36.627 ************************************ 00:04:36.627 14:31:56 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:36.627 * Looking for test storage... 00:04:36.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:36.627 14:31:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:36.627 14:31:56 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.627 14:31:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.627 14:31:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.627 ************************************ 00:04:36.627 START TEST env_memory 00:04:36.627 ************************************ 00:04:36.627 14:31:56 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:36.627 00:04:36.627 00:04:36.627 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.627 http://cunit.sourceforge.net/ 00:04:36.627 00:04:36.627 00:04:36.627 Suite: memory 00:04:36.627 Test: alloc and free memory map ...[2024-07-25 14:31:56.594958] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:36.627 passed 00:04:36.627 Test: mem map translation ...[2024-07-25 14:31:56.614021] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:36.627 [2024-07-25 14:31:56.614036] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:36.627 [2024-07-25 14:31:56.614075] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:36.627 [2024-07-25 14:31:56.614082] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:36.627 passed 00:04:36.627 Test: mem map registration ...[2024-07-25 14:31:56.652754] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:36.627 [2024-07-25 14:31:56.652769] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:36.627 passed 00:04:36.627 Test: mem map adjacent registrations ...passed 00:04:36.627 00:04:36.627 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.627 suites 1 1 n/a 0 0 00:04:36.627 tests 4 4 4 0 0 00:04:36.627 asserts 152 152 152 0 n/a 00:04:36.627 00:04:36.627 Elapsed time = 0.138 seconds 00:04:36.627 00:04:36.627 real 0m0.150s 00:04:36.627 user 0m0.142s 00:04:36.627 sys 0m0.007s 00:04:36.627 14:31:56 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.627 14:31:56 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:36.627 ************************************ 00:04:36.627 END TEST env_memory 00:04:36.627 ************************************ 00:04:36.627 14:31:56 env -- common/autotest_common.sh@1142 -- # return 0 00:04:36.627 14:31:56 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:36.627 14:31:56 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.627 14:31:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.627 14:31:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.627 ************************************ 00:04:36.627 START TEST env_vtophys 00:04:36.627 ************************************ 00:04:36.627 14:31:56 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:36.627 EAL: lib.eal log level changed from notice to debug 00:04:36.627 EAL: Detected lcore 0 as core 0 on socket 0 00:04:36.627 EAL: Detected lcore 1 as core 1 on socket 0 00:04:36.627 EAL: Detected lcore 2 as core 2 on socket 0 00:04:36.627 EAL: Detected lcore 3 as core 3 on socket 0 00:04:36.627 EAL: Detected lcore 4 as core 4 on socket 0 00:04:36.627 EAL: Detected lcore 5 as core 5 on socket 0 00:04:36.627 EAL: Detected lcore 6 as core 6 on socket 0 00:04:36.627 EAL: Detected lcore 7 as core 8 on socket 0 00:04:36.627 EAL: Detected lcore 8 as core 9 on socket 0 00:04:36.627 EAL: Detected lcore 9 as core 10 on socket 0 00:04:36.627 EAL: Detected lcore 10 as core 11 on socket 0 00:04:36.627 EAL: Detected lcore 11 as core 12 on socket 0 00:04:36.627 EAL: Detected lcore 12 as core 13 on socket 0 00:04:36.627 EAL: Detected lcore 13 as core 16 on socket 0 00:04:36.627 EAL: Detected lcore 14 as core 17 on socket 0 00:04:36.627 EAL: Detected lcore 15 as core 18 on socket 0 00:04:36.627 EAL: Detected lcore 16 as core 19 on socket 0 00:04:36.627 EAL: Detected lcore 17 as core 20 on socket 0 00:04:36.627 EAL: Detected lcore 18 as core 21 on socket 0 00:04:36.627 EAL: Detected lcore 19 as core 25 on socket 0 00:04:36.627 EAL: Detected lcore 20 as core 26 on socket 0 00:04:36.627 EAL: Detected lcore 21 as core 27 on socket 0 00:04:36.627 EAL: Detected lcore 22 as core 28 on socket 0 00:04:36.627 EAL: Detected lcore 23 as core 29 on socket 0 00:04:36.627 EAL: Detected lcore 24 as core 0 on socket 1 00:04:36.627 EAL: Detected lcore 25 as core 1 on socket 1 00:04:36.627 EAL: Detected lcore 26 as core 2 on socket 1 00:04:36.627 EAL: Detected lcore 27 as core 3 on socket 1 00:04:36.627 EAL: Detected lcore 28 as core 4 on socket 1 00:04:36.627 EAL: Detected lcore 29 as core 5 on socket 1 00:04:36.627 EAL: Detected lcore 30 as core 6 on socket 1 00:04:36.627 EAL: Detected lcore 31 as core 9 on socket 1 00:04:36.627 EAL: Detected lcore 32 as core 10 on socket 1 00:04:36.627 EAL: Detected lcore 33 as core 11 on socket 1 00:04:36.627 EAL: Detected lcore 34 as core 12 on socket 1 00:04:36.627 EAL: Detected lcore 35 as core 13 on socket 1 00:04:36.627 EAL: Detected lcore 36 as core 16 on socket 1 00:04:36.627 EAL: Detected lcore 37 as core 17 on socket 1 00:04:36.627 EAL: Detected lcore 38 as core 18 on socket 1 00:04:36.627 EAL: Detected lcore 39 as core 19 on socket 1 00:04:36.627 EAL: Detected lcore 40 as core 20 on socket 1 00:04:36.627 EAL: Detected lcore 41 as core 21 on socket 1 00:04:36.627 EAL: Detected lcore 42 as core 24 on socket 1 00:04:36.627 EAL: Detected lcore 43 as core 25 on socket 1 00:04:36.627 EAL: Detected lcore 44 as core 
26 on socket 1 00:04:36.627 EAL: Detected lcore 45 as core 27 on socket 1 00:04:36.627 EAL: Detected lcore 46 as core 28 on socket 1 00:04:36.627 EAL: Detected lcore 47 as core 29 on socket 1 00:04:36.627 EAL: Detected lcore 48 as core 0 on socket 0 00:04:36.627 EAL: Detected lcore 49 as core 1 on socket 0 00:04:36.627 EAL: Detected lcore 50 as core 2 on socket 0 00:04:36.627 EAL: Detected lcore 51 as core 3 on socket 0 00:04:36.627 EAL: Detected lcore 52 as core 4 on socket 0 00:04:36.627 EAL: Detected lcore 53 as core 5 on socket 0 00:04:36.627 EAL: Detected lcore 54 as core 6 on socket 0 00:04:36.627 EAL: Detected lcore 55 as core 8 on socket 0 00:04:36.627 EAL: Detected lcore 56 as core 9 on socket 0 00:04:36.627 EAL: Detected lcore 57 as core 10 on socket 0 00:04:36.627 EAL: Detected lcore 58 as core 11 on socket 0 00:04:36.627 EAL: Detected lcore 59 as core 12 on socket 0 00:04:36.627 EAL: Detected lcore 60 as core 13 on socket 0 00:04:36.627 EAL: Detected lcore 61 as core 16 on socket 0 00:04:36.627 EAL: Detected lcore 62 as core 17 on socket 0 00:04:36.627 EAL: Detected lcore 63 as core 18 on socket 0 00:04:36.627 EAL: Detected lcore 64 as core 19 on socket 0 00:04:36.627 EAL: Detected lcore 65 as core 20 on socket 0 00:04:36.627 EAL: Detected lcore 66 as core 21 on socket 0 00:04:36.627 EAL: Detected lcore 67 as core 25 on socket 0 00:04:36.627 EAL: Detected lcore 68 as core 26 on socket 0 00:04:36.627 EAL: Detected lcore 69 as core 27 on socket 0 00:04:36.627 EAL: Detected lcore 70 as core 28 on socket 0 00:04:36.627 EAL: Detected lcore 71 as core 29 on socket 0 00:04:36.627 EAL: Detected lcore 72 as core 0 on socket 1 00:04:36.627 EAL: Detected lcore 73 as core 1 on socket 1 00:04:36.627 EAL: Detected lcore 74 as core 2 on socket 1 00:04:36.627 EAL: Detected lcore 75 as core 3 on socket 1 00:04:36.627 EAL: Detected lcore 76 as core 4 on socket 1 00:04:36.627 EAL: Detected lcore 77 as core 5 on socket 1 00:04:36.627 EAL: Detected lcore 78 as core 6 on socket 1 00:04:36.627 EAL: Detected lcore 79 as core 9 on socket 1 00:04:36.627 EAL: Detected lcore 80 as core 10 on socket 1 00:04:36.627 EAL: Detected lcore 81 as core 11 on socket 1 00:04:36.627 EAL: Detected lcore 82 as core 12 on socket 1 00:04:36.627 EAL: Detected lcore 83 as core 13 on socket 1 00:04:36.627 EAL: Detected lcore 84 as core 16 on socket 1 00:04:36.627 EAL: Detected lcore 85 as core 17 on socket 1 00:04:36.628 EAL: Detected lcore 86 as core 18 on socket 1 00:04:36.628 EAL: Detected lcore 87 as core 19 on socket 1 00:04:36.628 EAL: Detected lcore 88 as core 20 on socket 1 00:04:36.628 EAL: Detected lcore 89 as core 21 on socket 1 00:04:36.628 EAL: Detected lcore 90 as core 24 on socket 1 00:04:36.628 EAL: Detected lcore 91 as core 25 on socket 1 00:04:36.628 EAL: Detected lcore 92 as core 26 on socket 1 00:04:36.628 EAL: Detected lcore 93 as core 27 on socket 1 00:04:36.628 EAL: Detected lcore 94 as core 28 on socket 1 00:04:36.628 EAL: Detected lcore 95 as core 29 on socket 1 00:04:36.628 EAL: Maximum logical cores by configuration: 128 00:04:36.628 EAL: Detected CPU lcores: 96 00:04:36.628 EAL: Detected NUMA nodes: 2 00:04:36.628 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:36.628 EAL: Detected shared linkage of DPDK 00:04:36.628 EAL: No shared files mode enabled, IPC will be disabled 00:04:36.628 EAL: Bus pci wants IOVA as 'DC' 00:04:36.628 EAL: Buses did not request a specific IOVA mode. 00:04:36.628 EAL: IOMMU is available, selecting IOVA as VA mode. 
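EAL only selects 'IOVA as VA' when a usable IOMMU is present, and the VFIO messages that follow depend on the vfio modules being available. A quick host-side check of those prerequisites (a sketch using standard sysfs/procfs paths):

    # Non-empty output means the IOMMU is enabled and grouping devices
    ls /sys/kernel/iommu_groups | head
    # vfio / vfio_pci / vfio_iommu_type1 should be loaded (or built in)
    lsmod | grep vfio
    # The kernel command line should enable the IOMMU (intel_iommu=on or amd_iommu=on)
    grep -oE '(intel|amd)_iommu=[^ ]*' /proc/cmdline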
00:04:36.628 EAL: Selected IOVA mode 'VA' 00:04:36.628 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.628 EAL: Probing VFIO support... 00:04:36.628 EAL: IOMMU type 1 (Type 1) is supported 00:04:36.628 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:36.628 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:36.628 EAL: VFIO support initialized 00:04:36.628 EAL: Ask a virtual area of 0x2e000 bytes 00:04:36.628 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:36.628 EAL: Setting up physically contiguous memory... 00:04:36.628 EAL: Setting maximum number of open files to 524288 00:04:36.628 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:36.628 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:36.628 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:36.628 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.628 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:36.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.628 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.628 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:36.628 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:36.628 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.628 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:36.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.628 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.628 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:36.628 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:36.628 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.628 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:36.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.628 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.628 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:36.628 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:36.628 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.628 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:36.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.628 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.628 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:36.628 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:36.628 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:36.628 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.628 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:36.628 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.628 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.628 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:36.628 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:36.628 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.628 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:36.628 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.628 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.628 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:36.628 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:36.628 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.628 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:36.628 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:36.628 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.628 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:36.628 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:36.628 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.628 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:36.628 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.628 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.628 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:36.628 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:36.628 EAL: Hugepages will be freed exactly as allocated. 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: TSC frequency is ~2300000 KHz 00:04:36.628 EAL: Main lcore 0 is ready (tid=7f0752af2a00;cpuset=[0]) 00:04:36.628 EAL: Trying to obtain current memory policy. 00:04:36.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.628 EAL: Restoring previous memory policy: 0 00:04:36.628 EAL: request: mp_malloc_sync 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: Heap on socket 0 was expanded by 2MB 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:36.628 EAL: Mem event callback 'spdk:(nil)' registered 00:04:36.628 00:04:36.628 00:04:36.628 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.628 http://cunit.sourceforge.net/ 00:04:36.628 00:04:36.628 00:04:36.628 Suite: components_suite 00:04:36.628 Test: vtophys_malloc_test ...passed 00:04:36.628 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:36.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.628 EAL: Restoring previous memory policy: 4 00:04:36.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.628 EAL: request: mp_malloc_sync 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: Heap on socket 0 was expanded by 4MB 00:04:36.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.628 EAL: request: mp_malloc_sync 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: Heap on socket 0 was shrunk by 4MB 00:04:36.628 EAL: Trying to obtain current memory policy. 00:04:36.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.628 EAL: Restoring previous memory policy: 4 00:04:36.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.628 EAL: request: mp_malloc_sync 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: Heap on socket 0 was expanded by 6MB 00:04:36.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.628 EAL: request: mp_malloc_sync 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: Heap on socket 0 was shrunk by 6MB 00:04:36.628 EAL: Trying to obtain current memory policy. 
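The recurring 'No free 2048 kB hugepages reported on node 1' warning matches the earlier setup.sh status output: node0 had 2048/2048 2 MB pages reserved and node1 had none, so only socket 0 can actually back allocations even though memseg lists are reserved for both sockets. Hugepages can be reserved per NUMA node with a sysfs write; a sketch (page counts are illustrative, and SPDK's setup.sh can do the same via its HUGEMEM setting):

    # Reserve 2 MB hugepages on each NUMA node
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    grep -i huge /proc/meminfo    # verify HugePages_Total / HugePages_Free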
00:04:36.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.628 EAL: Restoring previous memory policy: 4 00:04:36.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.628 EAL: request: mp_malloc_sync 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: Heap on socket 0 was expanded by 10MB 00:04:36.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.628 EAL: request: mp_malloc_sync 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: Heap on socket 0 was shrunk by 10MB 00:04:36.628 EAL: Trying to obtain current memory policy. 00:04:36.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.628 EAL: Restoring previous memory policy: 4 00:04:36.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.628 EAL: request: mp_malloc_sync 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: Heap on socket 0 was expanded by 18MB 00:04:36.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.628 EAL: request: mp_malloc_sync 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: Heap on socket 0 was shrunk by 18MB 00:04:36.628 EAL: Trying to obtain current memory policy. 00:04:36.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.628 EAL: Restoring previous memory policy: 4 00:04:36.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.628 EAL: request: mp_malloc_sync 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: Heap on socket 0 was expanded by 34MB 00:04:36.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.628 EAL: request: mp_malloc_sync 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: Heap on socket 0 was shrunk by 34MB 00:04:36.628 EAL: Trying to obtain current memory policy. 00:04:36.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.628 EAL: Restoring previous memory policy: 4 00:04:36.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.628 EAL: request: mp_malloc_sync 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: Heap on socket 0 was expanded by 66MB 00:04:36.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.628 EAL: request: mp_malloc_sync 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: Heap on socket 0 was shrunk by 66MB 00:04:36.628 EAL: Trying to obtain current memory policy. 00:04:36.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.628 EAL: Restoring previous memory policy: 4 00:04:36.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.628 EAL: request: mp_malloc_sync 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: Heap on socket 0 was expanded by 130MB 00:04:36.888 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.888 EAL: request: mp_malloc_sync 00:04:36.888 EAL: No shared files mode enabled, IPC is disabled 00:04:36.888 EAL: Heap on socket 0 was shrunk by 130MB 00:04:36.888 EAL: Trying to obtain current memory policy. 
00:04:36.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.888 EAL: Restoring previous memory policy: 4 00:04:36.888 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.888 EAL: request: mp_malloc_sync 00:04:36.888 EAL: No shared files mode enabled, IPC is disabled 00:04:36.888 EAL: Heap on socket 0 was expanded by 258MB 00:04:36.888 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.888 EAL: request: mp_malloc_sync 00:04:36.888 EAL: No shared files mode enabled, IPC is disabled 00:04:36.888 EAL: Heap on socket 0 was shrunk by 258MB 00:04:36.888 EAL: Trying to obtain current memory policy. 00:04:36.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.888 EAL: Restoring previous memory policy: 4 00:04:36.888 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.888 EAL: request: mp_malloc_sync 00:04:36.888 EAL: No shared files mode enabled, IPC is disabled 00:04:36.888 EAL: Heap on socket 0 was expanded by 514MB 00:04:37.148 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.148 EAL: request: mp_malloc_sync 00:04:37.148 EAL: No shared files mode enabled, IPC is disabled 00:04:37.148 EAL: Heap on socket 0 was shrunk by 514MB 00:04:37.148 EAL: Trying to obtain current memory policy. 00:04:37.148 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.407 EAL: Restoring previous memory policy: 4 00:04:37.407 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.407 EAL: request: mp_malloc_sync 00:04:37.407 EAL: No shared files mode enabled, IPC is disabled 00:04:37.407 EAL: Heap on socket 0 was expanded by 1026MB 00:04:37.407 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.666 EAL: request: mp_malloc_sync 00:04:37.666 EAL: No shared files mode enabled, IPC is disabled 00:04:37.666 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:37.666 passed 00:04:37.666 00:04:37.666 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.666 suites 1 1 n/a 0 0 00:04:37.666 tests 2 2 2 0 0 00:04:37.666 asserts 497 497 497 0 n/a 00:04:37.666 00:04:37.666 Elapsed time = 0.960 seconds 00:04:37.666 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.666 EAL: request: mp_malloc_sync 00:04:37.666 EAL: No shared files mode enabled, IPC is disabled 00:04:37.666 EAL: Heap on socket 0 was shrunk by 2MB 00:04:37.666 EAL: No shared files mode enabled, IPC is disabled 00:04:37.666 EAL: No shared files mode enabled, IPC is disabled 00:04:37.666 EAL: No shared files mode enabled, IPC is disabled 00:04:37.666 00:04:37.666 real 0m1.072s 00:04:37.666 user 0m0.630s 00:04:37.666 sys 0m0.406s 00:04:37.666 14:31:57 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.666 14:31:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:37.666 ************************************ 00:04:37.666 END TEST env_vtophys 00:04:37.666 ************************************ 00:04:37.666 14:31:57 env -- common/autotest_common.sh@1142 -- # return 0 00:04:37.666 14:31:57 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:37.666 14:31:57 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.666 14:31:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.666 14:31:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.666 ************************************ 00:04:37.666 START TEST env_pci 00:04:37.666 ************************************ 00:04:37.666 14:31:57 env.env_pci -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:37.666 00:04:37.666 00:04:37.666 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.666 http://cunit.sourceforge.net/ 00:04:37.666 00:04:37.666 00:04:37.666 Suite: pci 00:04:37.666 Test: pci_hook ...[2024-07-25 14:31:57.904907] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2142640 has claimed it 00:04:37.666 EAL: Cannot find device (10000:00:01.0) 00:04:37.666 EAL: Failed to attach device on primary process 00:04:37.666 passed 00:04:37.666 00:04:37.666 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.666 suites 1 1 n/a 0 0 00:04:37.666 tests 1 1 1 0 0 00:04:37.666 asserts 25 25 25 0 n/a 00:04:37.666 00:04:37.666 Elapsed time = 0.019 seconds 00:04:37.666 00:04:37.666 real 0m0.029s 00:04:37.666 user 0m0.012s 00:04:37.666 sys 0m0.018s 00:04:37.666 14:31:57 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.666 14:31:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:37.666 ************************************ 00:04:37.666 END TEST env_pci 00:04:37.666 ************************************ 00:04:37.666 14:31:57 env -- common/autotest_common.sh@1142 -- # return 0 00:04:37.666 14:31:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:37.666 14:31:57 env -- env/env.sh@15 -- # uname 00:04:37.926 14:31:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:37.926 14:31:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:37.926 14:31:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.926 14:31:57 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:37.926 14:31:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.926 14:31:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.926 ************************************ 00:04:37.926 START TEST env_dpdk_post_init 00:04:37.926 ************************************ 00:04:37.926 14:31:57 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.926 EAL: Detected CPU lcores: 96 00:04:37.926 EAL: Detected NUMA nodes: 2 00:04:37.926 EAL: Detected shared linkage of DPDK 00:04:37.926 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.926 EAL: Selected IOVA mode 'VA' 00:04:37.926 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.926 EAL: VFIO support initialized 00:04:37.926 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.926 EAL: Using IOMMU type 1 (Type 1) 00:04:37.926 EAL: Ignore mapping IO port bar(1) 00:04:37.926 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:37.926 EAL: Ignore mapping IO port bar(1) 00:04:37.926 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:37.926 EAL: Ignore mapping IO port bar(1) 00:04:37.926 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:37.926 EAL: Ignore mapping IO port bar(1) 00:04:37.926 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:37.926 EAL: Ignore mapping IO port bar(1) 00:04:37.926 EAL: Probe PCI driver: spdk_ioat (8086:2021) 
device: 0000:00:04.4 (socket 0) 00:04:37.926 EAL: Ignore mapping IO port bar(1) 00:04:37.926 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:37.926 EAL: Ignore mapping IO port bar(1) 00:04:37.926 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:37.926 EAL: Ignore mapping IO port bar(1) 00:04:37.926 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:38.865 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:38.865 EAL: Ignore mapping IO port bar(1) 00:04:38.865 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:38.865 EAL: Ignore mapping IO port bar(1) 00:04:38.865 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:38.865 EAL: Ignore mapping IO port bar(1) 00:04:38.865 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:38.865 EAL: Ignore mapping IO port bar(1) 00:04:38.865 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:38.865 EAL: Ignore mapping IO port bar(1) 00:04:38.865 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:38.865 EAL: Ignore mapping IO port bar(1) 00:04:38.865 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:38.865 EAL: Ignore mapping IO port bar(1) 00:04:38.865 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:38.865 EAL: Ignore mapping IO port bar(1) 00:04:38.865 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:42.156 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:42.156 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:42.156 Starting DPDK initialization... 00:04:42.156 Starting SPDK post initialization... 00:04:42.156 SPDK NVMe probe 00:04:42.156 Attaching to 0000:5e:00.0 00:04:42.156 Attached to 0000:5e:00.0 00:04:42.156 Cleaning up... 
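The probe lines above are SPDK's userspace drivers (spdk_ioat, spdk_nvme) claiming the devices that were bound to vfio-pci, ending with the attach to 0000:5e:00.0. Which devices get claimed is controlled the same way the device tests did it earlier, with setup.sh and a PCI_ALLOWED filter; a sketch from the SPDK source tree:

    # Show each device, its current driver, and whether SPDK would use it
    sudo ./scripts/setup.sh status
    # Bind only the target NVMe controller to vfio-pci, leave the rest alone
    sudo PCI_ALLOWED="0000:5e:00.0" ./scripts/setup.sh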
00:04:42.156 00:04:42.156 real 0m4.302s 00:04:42.156 user 0m3.275s 00:04:42.156 sys 0m0.101s 00:04:42.156 14:32:02 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.156 14:32:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.156 ************************************ 00:04:42.156 END TEST env_dpdk_post_init 00:04:42.156 ************************************ 00:04:42.156 14:32:02 env -- common/autotest_common.sh@1142 -- # return 0 00:04:42.156 14:32:02 env -- env/env.sh@26 -- # uname 00:04:42.156 14:32:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:42.156 14:32:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.156 14:32:02 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.156 14:32:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.156 14:32:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.156 ************************************ 00:04:42.156 START TEST env_mem_callbacks 00:04:42.156 ************************************ 00:04:42.156 14:32:02 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.156 EAL: Detected CPU lcores: 96 00:04:42.156 EAL: Detected NUMA nodes: 2 00:04:42.156 EAL: Detected shared linkage of DPDK 00:04:42.156 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:42.156 EAL: Selected IOVA mode 'VA' 00:04:42.156 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.156 EAL: VFIO support initialized 00:04:42.156 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:42.156 00:04:42.156 00:04:42.156 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.156 http://cunit.sourceforge.net/ 00:04:42.156 00:04:42.156 00:04:42.156 Suite: memory 00:04:42.156 Test: test ... 
00:04:42.156 register 0x200000200000 2097152 00:04:42.156 malloc 3145728 00:04:42.156 register 0x200000400000 4194304 00:04:42.156 buf 0x200000500000 len 3145728 PASSED 00:04:42.156 malloc 64 00:04:42.156 buf 0x2000004fff40 len 64 PASSED 00:04:42.156 malloc 4194304 00:04:42.156 register 0x200000800000 6291456 00:04:42.156 buf 0x200000a00000 len 4194304 PASSED 00:04:42.156 free 0x200000500000 3145728 00:04:42.156 free 0x2000004fff40 64 00:04:42.156 unregister 0x200000400000 4194304 PASSED 00:04:42.156 free 0x200000a00000 4194304 00:04:42.156 unregister 0x200000800000 6291456 PASSED 00:04:42.156 malloc 8388608 00:04:42.156 register 0x200000400000 10485760 00:04:42.156 buf 0x200000600000 len 8388608 PASSED 00:04:42.156 free 0x200000600000 8388608 00:04:42.156 unregister 0x200000400000 10485760 PASSED 00:04:42.156 passed 00:04:42.156 00:04:42.156 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.156 suites 1 1 n/a 0 0 00:04:42.156 tests 1 1 1 0 0 00:04:42.156 asserts 15 15 15 0 n/a 00:04:42.156 00:04:42.156 Elapsed time = 0.005 seconds 00:04:42.156 00:04:42.156 real 0m0.053s 00:04:42.156 user 0m0.018s 00:04:42.156 sys 0m0.035s 00:04:42.156 14:32:02 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.156 14:32:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:42.156 ************************************ 00:04:42.156 END TEST env_mem_callbacks 00:04:42.156 ************************************ 00:04:42.156 14:32:02 env -- common/autotest_common.sh@1142 -- # return 0 00:04:42.156 00:04:42.156 real 0m5.999s 00:04:42.156 user 0m4.239s 00:04:42.156 sys 0m0.823s 00:04:42.156 14:32:02 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.416 14:32:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.416 ************************************ 00:04:42.416 END TEST env 00:04:42.416 ************************************ 00:04:42.416 14:32:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:42.416 14:32:02 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:42.416 14:32:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.416 14:32:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.416 14:32:02 -- common/autotest_common.sh@10 -- # set +x 00:04:42.416 ************************************ 00:04:42.416 START TEST rpc 00:04:42.416 ************************************ 00:04:42.416 14:32:02 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:42.416 * Looking for test storage... 00:04:42.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:42.416 14:32:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2143455 00:04:42.416 14:32:02 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:42.416 14:32:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.416 14:32:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2143455 00:04:42.416 14:32:02 rpc -- common/autotest_common.sh@829 -- # '[' -z 2143455 ']' 00:04:42.416 14:32:02 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.416 14:32:02 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.416 14:32:02 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
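waitforlisten above simply blocks until spdk_tgt answers on /var/tmp/spdk.sock before the RPC tests begin. Outside the test harness the same handshake can be approximated by polling the JSON-RPC socket; a sketch from an SPDK build tree (rpc_get_methods is a safe query that any running target answers):

    ./build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    # Poll until the JSON-RPC server on /var/tmp/spdk.sock responds
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$tgt_pid" || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "spdk_tgt is ready"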
00:04:42.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.416 14:32:02 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.416 14:32:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.416 [2024-07-25 14:32:02.641283] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:04:42.416 [2024-07-25 14:32:02.641330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143455 ] 00:04:42.416 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.416 [2024-07-25 14:32:02.694091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.676 [2024-07-25 14:32:02.777346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:42.676 [2024-07-25 14:32:02.777381] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2143455' to capture a snapshot of events at runtime. 00:04:42.676 [2024-07-25 14:32:02.777388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:42.676 [2024-07-25 14:32:02.777394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:42.676 [2024-07-25 14:32:02.777400] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2143455 for offline analysis/debug. 00:04:42.676 [2024-07-25 14:32:02.777417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.246 14:32:03 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.246 14:32:03 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:43.246 14:32:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:43.246 14:32:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:43.246 14:32:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:43.246 14:32:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:43.246 14:32:03 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.246 14:32:03 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.246 14:32:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.246 ************************************ 00:04:43.246 START TEST rpc_integrity 00:04:43.246 ************************************ 00:04:43.246 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:43.246 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.246 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.246 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.246 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.247 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:43.247 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.247 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.247 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.247 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.247 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.247 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.247 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.247 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.247 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.247 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.247 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.247 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.247 { 00:04:43.247 "name": "Malloc0", 00:04:43.247 "aliases": [ 00:04:43.247 "79c846c3-159d-4da7-ae14-2330704ec7c1" 00:04:43.247 ], 00:04:43.247 "product_name": "Malloc disk", 00:04:43.247 "block_size": 512, 00:04:43.247 "num_blocks": 16384, 00:04:43.247 "uuid": "79c846c3-159d-4da7-ae14-2330704ec7c1", 00:04:43.247 "assigned_rate_limits": { 00:04:43.247 "rw_ios_per_sec": 0, 00:04:43.247 "rw_mbytes_per_sec": 0, 00:04:43.247 "r_mbytes_per_sec": 0, 00:04:43.247 "w_mbytes_per_sec": 0 00:04:43.247 }, 00:04:43.247 "claimed": false, 00:04:43.247 "zoned": false, 00:04:43.247 "supported_io_types": { 00:04:43.247 "read": true, 00:04:43.247 "write": true, 00:04:43.247 "unmap": true, 00:04:43.247 "flush": true, 00:04:43.247 "reset": true, 00:04:43.247 "nvme_admin": false, 00:04:43.247 "nvme_io": false, 00:04:43.247 "nvme_io_md": false, 00:04:43.247 "write_zeroes": true, 00:04:43.247 "zcopy": true, 00:04:43.247 "get_zone_info": false, 00:04:43.247 "zone_management": false, 00:04:43.247 "zone_append": false, 00:04:43.247 "compare": false, 00:04:43.247 "compare_and_write": false, 00:04:43.247 "abort": true, 00:04:43.247 "seek_hole": false, 00:04:43.247 "seek_data": false, 00:04:43.247 "copy": true, 00:04:43.247 "nvme_iov_md": false 00:04:43.247 }, 00:04:43.247 "memory_domains": [ 00:04:43.247 { 00:04:43.247 "dma_device_id": "system", 00:04:43.247 "dma_device_type": 1 00:04:43.247 }, 00:04:43.247 { 00:04:43.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.248 "dma_device_type": 2 00:04:43.248 } 00:04:43.248 ], 00:04:43.248 "driver_specific": {} 00:04:43.248 } 00:04:43.248 ]' 00:04:43.248 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.509 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.509 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:43.509 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.509 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.509 [2024-07-25 14:32:03.578915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:43.509 [2024-07-25 14:32:03.578943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.509 [2024-07-25 14:32:03.578955] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18be2d0 00:04:43.509 [2024-07-25 14:32:03.578962] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.509 
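At this point the target has claimed Malloc0 and is registering Passthru0 on top of it. The whole rpc_integrity sequence can also be reproduced by hand against the same /var/tmp/spdk.sock socket with scripts/rpc.py; a rough equivalent of what the test's rpc_cmd wrapper and jq length checks do, with bdev names and sizes as in the log above:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py bdev_malloc_create 8 512                        # 8 MiB / 512 B blocks -> Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length                      # expect 2 (Malloc0 + Passthru0)
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length                      # expect 0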
[2024-07-25 14:32:03.580005] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.509 [2024-07-25 14:32:03.580027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.509 Passthru0 00:04:43.509 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.509 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.509 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.509 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.509 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.509 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.509 { 00:04:43.509 "name": "Malloc0", 00:04:43.509 "aliases": [ 00:04:43.509 "79c846c3-159d-4da7-ae14-2330704ec7c1" 00:04:43.509 ], 00:04:43.509 "product_name": "Malloc disk", 00:04:43.509 "block_size": 512, 00:04:43.509 "num_blocks": 16384, 00:04:43.509 "uuid": "79c846c3-159d-4da7-ae14-2330704ec7c1", 00:04:43.509 "assigned_rate_limits": { 00:04:43.509 "rw_ios_per_sec": 0, 00:04:43.509 "rw_mbytes_per_sec": 0, 00:04:43.509 "r_mbytes_per_sec": 0, 00:04:43.509 "w_mbytes_per_sec": 0 00:04:43.509 }, 00:04:43.509 "claimed": true, 00:04:43.509 "claim_type": "exclusive_write", 00:04:43.509 "zoned": false, 00:04:43.509 "supported_io_types": { 00:04:43.509 "read": true, 00:04:43.509 "write": true, 00:04:43.509 "unmap": true, 00:04:43.509 "flush": true, 00:04:43.509 "reset": true, 00:04:43.509 "nvme_admin": false, 00:04:43.509 "nvme_io": false, 00:04:43.509 "nvme_io_md": false, 00:04:43.509 "write_zeroes": true, 00:04:43.509 "zcopy": true, 00:04:43.509 "get_zone_info": false, 00:04:43.509 "zone_management": false, 00:04:43.509 "zone_append": false, 00:04:43.509 "compare": false, 00:04:43.509 "compare_and_write": false, 00:04:43.509 "abort": true, 00:04:43.509 "seek_hole": false, 00:04:43.509 "seek_data": false, 00:04:43.509 "copy": true, 00:04:43.509 "nvme_iov_md": false 00:04:43.509 }, 00:04:43.509 "memory_domains": [ 00:04:43.509 { 00:04:43.509 "dma_device_id": "system", 00:04:43.509 "dma_device_type": 1 00:04:43.509 }, 00:04:43.509 { 00:04:43.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.509 "dma_device_type": 2 00:04:43.509 } 00:04:43.509 ], 00:04:43.509 "driver_specific": {} 00:04:43.509 }, 00:04:43.509 { 00:04:43.509 "name": "Passthru0", 00:04:43.509 "aliases": [ 00:04:43.509 "a3cc76a0-87b9-54bc-b6d7-04c2c8229b2b" 00:04:43.509 ], 00:04:43.509 "product_name": "passthru", 00:04:43.509 "block_size": 512, 00:04:43.509 "num_blocks": 16384, 00:04:43.509 "uuid": "a3cc76a0-87b9-54bc-b6d7-04c2c8229b2b", 00:04:43.509 "assigned_rate_limits": { 00:04:43.509 "rw_ios_per_sec": 0, 00:04:43.509 "rw_mbytes_per_sec": 0, 00:04:43.509 "r_mbytes_per_sec": 0, 00:04:43.509 "w_mbytes_per_sec": 0 00:04:43.509 }, 00:04:43.509 "claimed": false, 00:04:43.509 "zoned": false, 00:04:43.509 "supported_io_types": { 00:04:43.509 "read": true, 00:04:43.509 "write": true, 00:04:43.509 "unmap": true, 00:04:43.509 "flush": true, 00:04:43.509 "reset": true, 00:04:43.509 "nvme_admin": false, 00:04:43.509 "nvme_io": false, 00:04:43.509 "nvme_io_md": false, 00:04:43.509 "write_zeroes": true, 00:04:43.510 "zcopy": true, 00:04:43.510 "get_zone_info": false, 00:04:43.510 "zone_management": false, 00:04:43.510 "zone_append": false, 00:04:43.510 "compare": false, 00:04:43.510 "compare_and_write": false, 00:04:43.510 "abort": true, 00:04:43.510 "seek_hole": false, 
00:04:43.510 "seek_data": false, 00:04:43.510 "copy": true, 00:04:43.510 "nvme_iov_md": false 00:04:43.510 }, 00:04:43.510 "memory_domains": [ 00:04:43.510 { 00:04:43.510 "dma_device_id": "system", 00:04:43.510 "dma_device_type": 1 00:04:43.510 }, 00:04:43.510 { 00:04:43.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.510 "dma_device_type": 2 00:04:43.510 } 00:04:43.510 ], 00:04:43.510 "driver_specific": { 00:04:43.510 "passthru": { 00:04:43.510 "name": "Passthru0", 00:04:43.510 "base_bdev_name": "Malloc0" 00:04:43.510 } 00:04:43.510 } 00:04:43.510 } 00:04:43.510 ]' 00:04:43.510 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.510 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.510 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.510 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.510 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.510 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.510 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:43.510 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.510 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.510 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.510 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.510 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.510 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.510 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.510 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.510 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.510 14:32:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.510 00:04:43.510 real 0m0.248s 00:04:43.510 user 0m0.147s 00:04:43.510 sys 0m0.032s 00:04:43.510 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.510 14:32:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.510 ************************************ 00:04:43.510 END TEST rpc_integrity 00:04:43.510 ************************************ 00:04:43.510 14:32:03 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:43.510 14:32:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:43.510 14:32:03 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.510 14:32:03 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.510 14:32:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.510 ************************************ 00:04:43.510 START TEST rpc_plugins 00:04:43.510 ************************************ 00:04:43.510 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:43.510 14:32:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:43.510 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.510 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.510 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.510 14:32:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:43.510 14:32:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:43.510 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.510 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.510 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.510 14:32:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:43.510 { 00:04:43.510 "name": "Malloc1", 00:04:43.510 "aliases": [ 00:04:43.510 "e345a772-7283-449f-b2be-5033b6758785" 00:04:43.510 ], 00:04:43.510 "product_name": "Malloc disk", 00:04:43.510 "block_size": 4096, 00:04:43.510 "num_blocks": 256, 00:04:43.510 "uuid": "e345a772-7283-449f-b2be-5033b6758785", 00:04:43.510 "assigned_rate_limits": { 00:04:43.510 "rw_ios_per_sec": 0, 00:04:43.510 "rw_mbytes_per_sec": 0, 00:04:43.510 "r_mbytes_per_sec": 0, 00:04:43.510 "w_mbytes_per_sec": 0 00:04:43.510 }, 00:04:43.510 "claimed": false, 00:04:43.510 "zoned": false, 00:04:43.510 "supported_io_types": { 00:04:43.510 "read": true, 00:04:43.510 "write": true, 00:04:43.510 "unmap": true, 00:04:43.510 "flush": true, 00:04:43.510 "reset": true, 00:04:43.510 "nvme_admin": false, 00:04:43.510 "nvme_io": false, 00:04:43.510 "nvme_io_md": false, 00:04:43.510 "write_zeroes": true, 00:04:43.510 "zcopy": true, 00:04:43.510 "get_zone_info": false, 00:04:43.510 "zone_management": false, 00:04:43.510 "zone_append": false, 00:04:43.510 "compare": false, 00:04:43.510 "compare_and_write": false, 00:04:43.510 "abort": true, 00:04:43.510 "seek_hole": false, 00:04:43.510 "seek_data": false, 00:04:43.510 "copy": true, 00:04:43.510 "nvme_iov_md": false 00:04:43.510 }, 00:04:43.510 "memory_domains": [ 00:04:43.510 { 00:04:43.510 "dma_device_id": "system", 00:04:43.510 "dma_device_type": 1 00:04:43.510 }, 00:04:43.510 { 00:04:43.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.510 "dma_device_type": 2 00:04:43.510 } 00:04:43.510 ], 00:04:43.510 "driver_specific": {} 00:04:43.510 } 00:04:43.510 ]' 00:04:43.510 14:32:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:43.770 14:32:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:43.770 14:32:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:43.770 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.770 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.770 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.770 14:32:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:43.770 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.770 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.770 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.770 14:32:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:43.770 14:32:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:43.770 14:32:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:43.770 00:04:43.770 real 0m0.131s 00:04:43.770 user 0m0.085s 00:04:43.770 sys 0m0.011s 00:04:43.770 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.770 14:32:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.770 ************************************ 00:04:43.770 END TEST rpc_plugins 00:04:43.770 ************************************ 00:04:43.770 14:32:03 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:43.770 14:32:03 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:43.770 14:32:03 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.770 14:32:03 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.770 14:32:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.770 ************************************ 00:04:43.770 START TEST rpc_trace_cmd_test 00:04:43.770 ************************************ 00:04:43.770 14:32:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:43.770 14:32:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:43.770 14:32:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:43.770 14:32:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.770 14:32:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.770 14:32:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.770 14:32:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:43.770 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2143455", 00:04:43.770 "tpoint_group_mask": "0x8", 00:04:43.770 "iscsi_conn": { 00:04:43.770 "mask": "0x2", 00:04:43.770 "tpoint_mask": "0x0" 00:04:43.770 }, 00:04:43.770 "scsi": { 00:04:43.770 "mask": "0x4", 00:04:43.770 "tpoint_mask": "0x0" 00:04:43.770 }, 00:04:43.770 "bdev": { 00:04:43.770 "mask": "0x8", 00:04:43.770 "tpoint_mask": "0xffffffffffffffff" 00:04:43.770 }, 00:04:43.770 "nvmf_rdma": { 00:04:43.770 "mask": "0x10", 00:04:43.770 "tpoint_mask": "0x0" 00:04:43.770 }, 00:04:43.770 "nvmf_tcp": { 00:04:43.770 "mask": "0x20", 00:04:43.770 "tpoint_mask": "0x0" 00:04:43.770 }, 00:04:43.770 "ftl": { 00:04:43.770 "mask": "0x40", 00:04:43.770 "tpoint_mask": "0x0" 00:04:43.770 }, 00:04:43.770 "blobfs": { 00:04:43.770 "mask": "0x80", 00:04:43.770 "tpoint_mask": "0x0" 00:04:43.770 }, 00:04:43.770 "dsa": { 00:04:43.770 "mask": "0x200", 00:04:43.770 "tpoint_mask": "0x0" 00:04:43.770 }, 00:04:43.770 "thread": { 00:04:43.770 "mask": "0x400", 00:04:43.770 "tpoint_mask": "0x0" 00:04:43.770 }, 00:04:43.770 "nvme_pcie": { 00:04:43.770 "mask": "0x800", 00:04:43.770 "tpoint_mask": "0x0" 00:04:43.770 }, 00:04:43.770 "iaa": { 00:04:43.770 "mask": "0x1000", 00:04:43.770 "tpoint_mask": "0x0" 00:04:43.770 }, 00:04:43.770 "nvme_tcp": { 00:04:43.770 "mask": "0x2000", 00:04:43.770 "tpoint_mask": "0x0" 00:04:43.770 }, 00:04:43.770 "bdev_nvme": { 00:04:43.770 "mask": "0x4000", 00:04:43.770 "tpoint_mask": "0x0" 00:04:43.770 }, 00:04:43.770 "sock": { 00:04:43.770 "mask": "0x8000", 00:04:43.770 "tpoint_mask": "0x0" 00:04:43.770 } 00:04:43.770 }' 00:04:43.770 14:32:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:43.770 14:32:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:43.770 14:32:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:43.770 14:32:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:44.031 14:32:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:44.031 14:32:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:44.031 14:32:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:44.031 14:32:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:44.031 14:32:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:44.031 14:32:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
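The mask checks above pass because spdk_tgt was started with '-e bdev': the bdev tracepoint group is bit 0x8 of tpoint_group_mask, enabling it sets that group's tpoint_mask to 0xffffffffffffffff, and tpoint_shm_path names the shared-memory ring a tracer can attach to. A hedged sketch of inspecting and capturing those tracepoints out of band, reusing the exact spdk_trace invocation the target suggested in its own startup notice:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask   # prints "0x8" for this run
  spdk_trace -s spdk_tgt -p 2143455                            # reads /dev/shm/spdk_tgt_trace.pid2143455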
00:04:44.031 00:04:44.031 real 0m0.190s 00:04:44.031 user 0m0.155s 00:04:44.031 sys 0m0.028s 00:04:44.031 14:32:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.031 14:32:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.031 ************************************ 00:04:44.031 END TEST rpc_trace_cmd_test 00:04:44.031 ************************************ 00:04:44.031 14:32:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:44.031 14:32:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:44.031 14:32:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:44.031 14:32:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:44.031 14:32:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.031 14:32:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.031 14:32:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.031 ************************************ 00:04:44.031 START TEST rpc_daemon_integrity 00:04:44.031 ************************************ 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.031 { 00:04:44.031 "name": "Malloc2", 00:04:44.031 "aliases": [ 00:04:44.031 "c4c13147-2488-4c84-be62-fe8bf5910f15" 00:04:44.031 ], 00:04:44.031 "product_name": "Malloc disk", 00:04:44.031 "block_size": 512, 00:04:44.031 "num_blocks": 16384, 00:04:44.031 "uuid": "c4c13147-2488-4c84-be62-fe8bf5910f15", 00:04:44.031 "assigned_rate_limits": { 00:04:44.031 "rw_ios_per_sec": 0, 00:04:44.031 "rw_mbytes_per_sec": 0, 00:04:44.031 "r_mbytes_per_sec": 0, 00:04:44.031 "w_mbytes_per_sec": 0 00:04:44.031 }, 00:04:44.031 "claimed": false, 00:04:44.031 "zoned": false, 00:04:44.031 "supported_io_types": { 00:04:44.031 "read": true, 00:04:44.031 "write": true, 00:04:44.031 "unmap": true, 00:04:44.031 "flush": true, 00:04:44.031 "reset": true, 00:04:44.031 "nvme_admin": false, 00:04:44.031 "nvme_io": false, 
00:04:44.031 "nvme_io_md": false, 00:04:44.031 "write_zeroes": true, 00:04:44.031 "zcopy": true, 00:04:44.031 "get_zone_info": false, 00:04:44.031 "zone_management": false, 00:04:44.031 "zone_append": false, 00:04:44.031 "compare": false, 00:04:44.031 "compare_and_write": false, 00:04:44.031 "abort": true, 00:04:44.031 "seek_hole": false, 00:04:44.031 "seek_data": false, 00:04:44.031 "copy": true, 00:04:44.031 "nvme_iov_md": false 00:04:44.031 }, 00:04:44.031 "memory_domains": [ 00:04:44.031 { 00:04:44.031 "dma_device_id": "system", 00:04:44.031 "dma_device_type": 1 00:04:44.031 }, 00:04:44.031 { 00:04:44.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.031 "dma_device_type": 2 00:04:44.031 } 00:04:44.031 ], 00:04:44.031 "driver_specific": {} 00:04:44.031 } 00:04:44.031 ]' 00:04:44.031 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.292 [2024-07-25 14:32:04.328976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:44.292 [2024-07-25 14:32:04.329005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.292 [2024-07-25 14:32:04.329017] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a55ac0 00:04:44.292 [2024-07-25 14:32:04.329023] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.292 [2024-07-25 14:32:04.329970] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.292 [2024-07-25 14:32:04.329992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.292 Passthru0 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.292 { 00:04:44.292 "name": "Malloc2", 00:04:44.292 "aliases": [ 00:04:44.292 "c4c13147-2488-4c84-be62-fe8bf5910f15" 00:04:44.292 ], 00:04:44.292 "product_name": "Malloc disk", 00:04:44.292 "block_size": 512, 00:04:44.292 "num_blocks": 16384, 00:04:44.292 "uuid": "c4c13147-2488-4c84-be62-fe8bf5910f15", 00:04:44.292 "assigned_rate_limits": { 00:04:44.292 "rw_ios_per_sec": 0, 00:04:44.292 "rw_mbytes_per_sec": 0, 00:04:44.292 "r_mbytes_per_sec": 0, 00:04:44.292 "w_mbytes_per_sec": 0 00:04:44.292 }, 00:04:44.292 "claimed": true, 00:04:44.292 "claim_type": "exclusive_write", 00:04:44.292 "zoned": false, 00:04:44.292 "supported_io_types": { 00:04:44.292 "read": true, 00:04:44.292 "write": true, 00:04:44.292 "unmap": true, 00:04:44.292 "flush": true, 00:04:44.292 "reset": true, 00:04:44.292 "nvme_admin": false, 00:04:44.292 "nvme_io": false, 00:04:44.292 "nvme_io_md": false, 00:04:44.292 "write_zeroes": true, 00:04:44.292 "zcopy": true, 00:04:44.292 "get_zone_info": 
false, 00:04:44.292 "zone_management": false, 00:04:44.292 "zone_append": false, 00:04:44.292 "compare": false, 00:04:44.292 "compare_and_write": false, 00:04:44.292 "abort": true, 00:04:44.292 "seek_hole": false, 00:04:44.292 "seek_data": false, 00:04:44.292 "copy": true, 00:04:44.292 "nvme_iov_md": false 00:04:44.292 }, 00:04:44.292 "memory_domains": [ 00:04:44.292 { 00:04:44.292 "dma_device_id": "system", 00:04:44.292 "dma_device_type": 1 00:04:44.292 }, 00:04:44.292 { 00:04:44.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.292 "dma_device_type": 2 00:04:44.292 } 00:04:44.292 ], 00:04:44.292 "driver_specific": {} 00:04:44.292 }, 00:04:44.292 { 00:04:44.292 "name": "Passthru0", 00:04:44.292 "aliases": [ 00:04:44.292 "dc78aecc-75f0-5471-8223-b27ee93c3a0c" 00:04:44.292 ], 00:04:44.292 "product_name": "passthru", 00:04:44.292 "block_size": 512, 00:04:44.292 "num_blocks": 16384, 00:04:44.292 "uuid": "dc78aecc-75f0-5471-8223-b27ee93c3a0c", 00:04:44.292 "assigned_rate_limits": { 00:04:44.292 "rw_ios_per_sec": 0, 00:04:44.292 "rw_mbytes_per_sec": 0, 00:04:44.292 "r_mbytes_per_sec": 0, 00:04:44.292 "w_mbytes_per_sec": 0 00:04:44.292 }, 00:04:44.292 "claimed": false, 00:04:44.292 "zoned": false, 00:04:44.292 "supported_io_types": { 00:04:44.292 "read": true, 00:04:44.292 "write": true, 00:04:44.292 "unmap": true, 00:04:44.292 "flush": true, 00:04:44.292 "reset": true, 00:04:44.292 "nvme_admin": false, 00:04:44.292 "nvme_io": false, 00:04:44.292 "nvme_io_md": false, 00:04:44.292 "write_zeroes": true, 00:04:44.292 "zcopy": true, 00:04:44.292 "get_zone_info": false, 00:04:44.292 "zone_management": false, 00:04:44.292 "zone_append": false, 00:04:44.292 "compare": false, 00:04:44.292 "compare_and_write": false, 00:04:44.292 "abort": true, 00:04:44.292 "seek_hole": false, 00:04:44.292 "seek_data": false, 00:04:44.292 "copy": true, 00:04:44.292 "nvme_iov_md": false 00:04:44.292 }, 00:04:44.292 "memory_domains": [ 00:04:44.292 { 00:04:44.292 "dma_device_id": "system", 00:04:44.292 "dma_device_type": 1 00:04:44.292 }, 00:04:44.292 { 00:04:44.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.292 "dma_device_type": 2 00:04:44.292 } 00:04:44.292 ], 00:04:44.292 "driver_specific": { 00:04:44.292 "passthru": { 00:04:44.292 "name": "Passthru0", 00:04:44.292 "base_bdev_name": "Malloc2" 00:04:44.292 } 00:04:44.292 } 00:04:44.292 } 00:04:44.292 ]' 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.292 14:32:04 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.292 00:04:44.292 real 0m0.251s 00:04:44.292 user 0m0.166s 00:04:44.292 sys 0m0.025s 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.292 14:32:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.292 ************************************ 00:04:44.292 END TEST rpc_daemon_integrity 00:04:44.292 ************************************ 00:04:44.292 14:32:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:44.292 14:32:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.292 14:32:04 rpc -- rpc/rpc.sh@84 -- # killprocess 2143455 00:04:44.292 14:32:04 rpc -- common/autotest_common.sh@948 -- # '[' -z 2143455 ']' 00:04:44.292 14:32:04 rpc -- common/autotest_common.sh@952 -- # kill -0 2143455 00:04:44.292 14:32:04 rpc -- common/autotest_common.sh@953 -- # uname 00:04:44.292 14:32:04 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.292 14:32:04 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2143455 00:04:44.292 14:32:04 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.292 14:32:04 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.293 14:32:04 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2143455' 00:04:44.293 killing process with pid 2143455 00:04:44.293 14:32:04 rpc -- common/autotest_common.sh@967 -- # kill 2143455 00:04:44.293 14:32:04 rpc -- common/autotest_common.sh@972 -- # wait 2143455 00:04:44.863 00:04:44.863 real 0m2.330s 00:04:44.863 user 0m2.955s 00:04:44.863 sys 0m0.632s 00:04:44.863 14:32:04 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.863 14:32:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.863 ************************************ 00:04:44.863 END TEST rpc 00:04:44.863 ************************************ 00:04:44.863 14:32:04 -- common/autotest_common.sh@1142 -- # return 0 00:04:44.863 14:32:04 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:44.863 14:32:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.863 14:32:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.863 14:32:04 -- common/autotest_common.sh@10 -- # set +x 00:04:44.863 ************************************ 00:04:44.863 START TEST skip_rpc 00:04:44.863 ************************************ 00:04:44.863 14:32:04 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:44.863 * Looking for test storage... 
00:04:44.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:44.863 14:32:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:44.863 14:32:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:44.863 14:32:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:44.863 14:32:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.863 14:32:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.863 14:32:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.863 ************************************ 00:04:44.863 START TEST skip_rpc 00:04:44.863 ************************************ 00:04:44.863 14:32:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:44.863 14:32:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2144092 00:04:44.863 14:32:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.863 14:32:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:44.863 14:32:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:44.863 [2024-07-25 14:32:05.085747] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:04:44.863 [2024-07-25 14:32:05.085795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144092 ] 00:04:44.863 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.863 [2024-07-25 14:32:05.139281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.123 [2024-07-25 14:32:05.214678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2144092 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2144092 ']' 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2144092 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2144092 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2144092' 00:04:50.402 killing process with pid 2144092 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2144092 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2144092 00:04:50.402 00:04:50.402 real 0m5.373s 00:04:50.402 user 0m5.155s 00:04:50.402 sys 0m0.251s 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.402 14:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.402 ************************************ 00:04:50.402 END TEST skip_rpc 00:04:50.402 ************************************ 00:04:50.402 14:32:10 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.402 14:32:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:50.402 14:32:10 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.402 14:32:10 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.402 14:32:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.402 ************************************ 00:04:50.402 START TEST skip_rpc_with_json 00:04:50.402 ************************************ 00:04:50.402 14:32:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:50.402 14:32:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:50.402 14:32:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2145033 00:04:50.402 14:32:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.402 14:32:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.402 14:32:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2145033 00:04:50.402 14:32:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2145033 ']' 00:04:50.402 14:32:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.402 14:32:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.402 14:32:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
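Once this listener is up, skip_rpc_with_json drives the save-and-replay flow whose output follows: nvmf_get_transports is expected to fail while no transport exists, nvmf_create_transport -t tcp then creates one, and save_config dumps the full subsystem configuration (the large JSON below) to test/rpc/config.json. A rough rpc.py equivalent of that first phase, with only the output path changed for illustration:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py nvmf_get_transports --trtype tcp    # fails: "transport 'tcp' does not exist"
  ./scripts/rpc.py nvmf_create_transport -t tcp        # logs "*** TCP Transport Init ***" in the target
  ./scripts/rpc.py save_config > /tmp/config.json      # illustrative path; the test writes test/rpc/config.json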
00:04:50.403 14:32:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.403 14:32:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.403 [2024-07-25 14:32:10.529507] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:04:50.403 [2024-07-25 14:32:10.529555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145033 ] 00:04:50.403 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.403 [2024-07-25 14:32:10.582512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.403 [2024-07-25 14:32:10.662617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.343 [2024-07-25 14:32:11.349295] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:51.343 request: 00:04:51.343 { 00:04:51.343 "trtype": "tcp", 00:04:51.343 "method": "nvmf_get_transports", 00:04:51.343 "req_id": 1 00:04:51.343 } 00:04:51.343 Got JSON-RPC error response 00:04:51.343 response: 00:04:51.343 { 00:04:51.343 "code": -19, 00:04:51.343 "message": "No such device" 00:04:51.343 } 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.343 [2024-07-25 14:32:11.357392] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.343 14:32:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:51.343 { 00:04:51.343 "subsystems": [ 00:04:51.343 { 00:04:51.343 "subsystem": "vfio_user_target", 00:04:51.343 "config": null 00:04:51.343 }, 00:04:51.343 { 00:04:51.343 "subsystem": "keyring", 00:04:51.343 "config": [] 00:04:51.343 }, 00:04:51.343 { 00:04:51.343 "subsystem": "iobuf", 00:04:51.343 "config": [ 00:04:51.343 { 00:04:51.343 "method": "iobuf_set_options", 00:04:51.343 "params": { 00:04:51.343 "small_pool_count": 8192, 00:04:51.343 "large_pool_count": 1024, 00:04:51.343 "small_bufsize": 8192, 00:04:51.343 "large_bufsize": 
135168 00:04:51.343 } 00:04:51.343 } 00:04:51.343 ] 00:04:51.343 }, 00:04:51.343 { 00:04:51.343 "subsystem": "sock", 00:04:51.343 "config": [ 00:04:51.343 { 00:04:51.343 "method": "sock_set_default_impl", 00:04:51.343 "params": { 00:04:51.343 "impl_name": "posix" 00:04:51.343 } 00:04:51.343 }, 00:04:51.343 { 00:04:51.343 "method": "sock_impl_set_options", 00:04:51.343 "params": { 00:04:51.343 "impl_name": "ssl", 00:04:51.343 "recv_buf_size": 4096, 00:04:51.343 "send_buf_size": 4096, 00:04:51.343 "enable_recv_pipe": true, 00:04:51.343 "enable_quickack": false, 00:04:51.343 "enable_placement_id": 0, 00:04:51.343 "enable_zerocopy_send_server": true, 00:04:51.343 "enable_zerocopy_send_client": false, 00:04:51.343 "zerocopy_threshold": 0, 00:04:51.343 "tls_version": 0, 00:04:51.343 "enable_ktls": false 00:04:51.343 } 00:04:51.343 }, 00:04:51.343 { 00:04:51.343 "method": "sock_impl_set_options", 00:04:51.343 "params": { 00:04:51.343 "impl_name": "posix", 00:04:51.343 "recv_buf_size": 2097152, 00:04:51.343 "send_buf_size": 2097152, 00:04:51.343 "enable_recv_pipe": true, 00:04:51.343 "enable_quickack": false, 00:04:51.343 "enable_placement_id": 0, 00:04:51.343 "enable_zerocopy_send_server": true, 00:04:51.343 "enable_zerocopy_send_client": false, 00:04:51.343 "zerocopy_threshold": 0, 00:04:51.343 "tls_version": 0, 00:04:51.343 "enable_ktls": false 00:04:51.343 } 00:04:51.343 } 00:04:51.343 ] 00:04:51.343 }, 00:04:51.343 { 00:04:51.343 "subsystem": "vmd", 00:04:51.343 "config": [] 00:04:51.343 }, 00:04:51.343 { 00:04:51.343 "subsystem": "accel", 00:04:51.343 "config": [ 00:04:51.343 { 00:04:51.343 "method": "accel_set_options", 00:04:51.343 "params": { 00:04:51.343 "small_cache_size": 128, 00:04:51.344 "large_cache_size": 16, 00:04:51.344 "task_count": 2048, 00:04:51.344 "sequence_count": 2048, 00:04:51.344 "buf_count": 2048 00:04:51.344 } 00:04:51.344 } 00:04:51.344 ] 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "subsystem": "bdev", 00:04:51.344 "config": [ 00:04:51.344 { 00:04:51.344 "method": "bdev_set_options", 00:04:51.344 "params": { 00:04:51.344 "bdev_io_pool_size": 65535, 00:04:51.344 "bdev_io_cache_size": 256, 00:04:51.344 "bdev_auto_examine": true, 00:04:51.344 "iobuf_small_cache_size": 128, 00:04:51.344 "iobuf_large_cache_size": 16 00:04:51.344 } 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "method": "bdev_raid_set_options", 00:04:51.344 "params": { 00:04:51.344 "process_window_size_kb": 1024 00:04:51.344 } 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "method": "bdev_iscsi_set_options", 00:04:51.344 "params": { 00:04:51.344 "timeout_sec": 30 00:04:51.344 } 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "method": "bdev_nvme_set_options", 00:04:51.344 "params": { 00:04:51.344 "action_on_timeout": "none", 00:04:51.344 "timeout_us": 0, 00:04:51.344 "timeout_admin_us": 0, 00:04:51.344 "keep_alive_timeout_ms": 10000, 00:04:51.344 "arbitration_burst": 0, 00:04:51.344 "low_priority_weight": 0, 00:04:51.344 "medium_priority_weight": 0, 00:04:51.344 "high_priority_weight": 0, 00:04:51.344 "nvme_adminq_poll_period_us": 10000, 00:04:51.344 "nvme_ioq_poll_period_us": 0, 00:04:51.344 "io_queue_requests": 0, 00:04:51.344 "delay_cmd_submit": true, 00:04:51.344 "transport_retry_count": 4, 00:04:51.344 "bdev_retry_count": 3, 00:04:51.344 "transport_ack_timeout": 0, 00:04:51.344 "ctrlr_loss_timeout_sec": 0, 00:04:51.344 "reconnect_delay_sec": 0, 00:04:51.344 "fast_io_fail_timeout_sec": 0, 00:04:51.344 "disable_auto_failback": false, 00:04:51.344 "generate_uuids": false, 00:04:51.344 "transport_tos": 0, 
00:04:51.344 "nvme_error_stat": false, 00:04:51.344 "rdma_srq_size": 0, 00:04:51.344 "io_path_stat": false, 00:04:51.344 "allow_accel_sequence": false, 00:04:51.344 "rdma_max_cq_size": 0, 00:04:51.344 "rdma_cm_event_timeout_ms": 0, 00:04:51.344 "dhchap_digests": [ 00:04:51.344 "sha256", 00:04:51.344 "sha384", 00:04:51.344 "sha512" 00:04:51.344 ], 00:04:51.344 "dhchap_dhgroups": [ 00:04:51.344 "null", 00:04:51.344 "ffdhe2048", 00:04:51.344 "ffdhe3072", 00:04:51.344 "ffdhe4096", 00:04:51.344 "ffdhe6144", 00:04:51.344 "ffdhe8192" 00:04:51.344 ] 00:04:51.344 } 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "method": "bdev_nvme_set_hotplug", 00:04:51.344 "params": { 00:04:51.344 "period_us": 100000, 00:04:51.344 "enable": false 00:04:51.344 } 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "method": "bdev_wait_for_examine" 00:04:51.344 } 00:04:51.344 ] 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "subsystem": "scsi", 00:04:51.344 "config": null 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "subsystem": "scheduler", 00:04:51.344 "config": [ 00:04:51.344 { 00:04:51.344 "method": "framework_set_scheduler", 00:04:51.344 "params": { 00:04:51.344 "name": "static" 00:04:51.344 } 00:04:51.344 } 00:04:51.344 ] 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "subsystem": "vhost_scsi", 00:04:51.344 "config": [] 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "subsystem": "vhost_blk", 00:04:51.344 "config": [] 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "subsystem": "ublk", 00:04:51.344 "config": [] 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "subsystem": "nbd", 00:04:51.344 "config": [] 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "subsystem": "nvmf", 00:04:51.344 "config": [ 00:04:51.344 { 00:04:51.344 "method": "nvmf_set_config", 00:04:51.344 "params": { 00:04:51.344 "discovery_filter": "match_any", 00:04:51.344 "admin_cmd_passthru": { 00:04:51.344 "identify_ctrlr": false 00:04:51.344 } 00:04:51.344 } 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "method": "nvmf_set_max_subsystems", 00:04:51.344 "params": { 00:04:51.344 "max_subsystems": 1024 00:04:51.344 } 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "method": "nvmf_set_crdt", 00:04:51.344 "params": { 00:04:51.344 "crdt1": 0, 00:04:51.344 "crdt2": 0, 00:04:51.344 "crdt3": 0 00:04:51.344 } 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "method": "nvmf_create_transport", 00:04:51.344 "params": { 00:04:51.344 "trtype": "TCP", 00:04:51.344 "max_queue_depth": 128, 00:04:51.344 "max_io_qpairs_per_ctrlr": 127, 00:04:51.344 "in_capsule_data_size": 4096, 00:04:51.344 "max_io_size": 131072, 00:04:51.344 "io_unit_size": 131072, 00:04:51.344 "max_aq_depth": 128, 00:04:51.344 "num_shared_buffers": 511, 00:04:51.344 "buf_cache_size": 4294967295, 00:04:51.344 "dif_insert_or_strip": false, 00:04:51.344 "zcopy": false, 00:04:51.344 "c2h_success": true, 00:04:51.344 "sock_priority": 0, 00:04:51.344 "abort_timeout_sec": 1, 00:04:51.344 "ack_timeout": 0, 00:04:51.344 "data_wr_pool_size": 0 00:04:51.344 } 00:04:51.344 } 00:04:51.344 ] 00:04:51.344 }, 00:04:51.344 { 00:04:51.344 "subsystem": "iscsi", 00:04:51.344 "config": [ 00:04:51.344 { 00:04:51.344 "method": "iscsi_set_options", 00:04:51.344 "params": { 00:04:51.344 "node_base": "iqn.2016-06.io.spdk", 00:04:51.344 "max_sessions": 128, 00:04:51.344 "max_connections_per_session": 2, 00:04:51.344 "max_queue_depth": 64, 00:04:51.344 "default_time2wait": 2, 00:04:51.344 "default_time2retain": 20, 00:04:51.344 "first_burst_length": 8192, 00:04:51.344 "immediate_data": true, 00:04:51.344 "allow_duplicated_isid": false, 00:04:51.344 
"error_recovery_level": 0, 00:04:51.344 "nop_timeout": 60, 00:04:51.344 "nop_in_interval": 30, 00:04:51.345 "disable_chap": false, 00:04:51.345 "require_chap": false, 00:04:51.345 "mutual_chap": false, 00:04:51.345 "chap_group": 0, 00:04:51.345 "max_large_datain_per_connection": 64, 00:04:51.345 "max_r2t_per_connection": 4, 00:04:51.345 "pdu_pool_size": 36864, 00:04:51.345 "immediate_data_pool_size": 16384, 00:04:51.345 "data_out_pool_size": 2048 00:04:51.345 } 00:04:51.345 } 00:04:51.345 ] 00:04:51.345 } 00:04:51.345 ] 00:04:51.345 } 00:04:51.345 14:32:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:51.345 14:32:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2145033 00:04:51.345 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2145033 ']' 00:04:51.345 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2145033 00:04:51.345 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:51.345 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.345 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2145033 00:04:51.345 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.345 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.345 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2145033' 00:04:51.345 killing process with pid 2145033 00:04:51.345 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2145033 00:04:51.345 14:32:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2145033 00:04:51.605 14:32:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2145275 00:04:51.605 14:32:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:51.605 14:32:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:56.942 14:32:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2145275 00:04:56.942 14:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2145275 ']' 00:04:56.942 14:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2145275 00:04:56.942 14:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:56.942 14:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:56.942 14:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2145275 00:04:56.942 14:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:56.942 14:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:56.942 14:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2145275' 00:04:56.942 killing process with pid 2145275 00:04:56.942 14:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2145275 00:04:56.942 14:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2145275 
00:04:56.942 14:32:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:56.942 14:32:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:57.202 00:04:57.202 real 0m6.760s 00:04:57.202 user 0m6.626s 00:04:57.202 sys 0m0.578s 00:04:57.202 14:32:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.202 14:32:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.202 ************************************ 00:04:57.202 END TEST skip_rpc_with_json 00:04:57.202 ************************************ 00:04:57.202 14:32:17 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.202 14:32:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:57.202 14:32:17 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.202 14:32:17 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.202 14:32:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.202 ************************************ 00:04:57.202 START TEST skip_rpc_with_delay 00:04:57.202 ************************************ 00:04:57.202 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:57.202 14:32:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.202 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.203 [2024-07-25 14:32:17.355903] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
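That error is the expected result: --wait-for-rpc tells spdk_tgt to pause subsystem initialization until it is resumed over RPC, so combining it with --no-rpc-server can never work and the app refuses to start; the NOT wrapper around the command turns that non-zero exit into a pass. For contrast, a hedged sketch of the normal --wait-for-rpc flow, which this run never exercises (framework_start_init as the resume RPC is an assumption relative to this log):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
  sleep 2                                             # illustrative; give the RPC listener a moment
  # ...early-boot RPCs (e.g. sock or accel options) would be issued here...
  ./scripts/rpc.py framework_start_init               # lets subsystem initialization proceed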
00:04:57.203 [2024-07-25 14:32:17.355962] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.203 00:04:57.203 real 0m0.064s 00:04:57.203 user 0m0.041s 00:04:57.203 sys 0m0.023s 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.203 14:32:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:57.203 ************************************ 00:04:57.203 END TEST skip_rpc_with_delay 00:04:57.203 ************************************ 00:04:57.203 14:32:17 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.203 14:32:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:57.203 14:32:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:57.203 14:32:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:57.203 14:32:17 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.203 14:32:17 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.203 14:32:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.203 ************************************ 00:04:57.203 START TEST exit_on_failed_rpc_init 00:04:57.203 ************************************ 00:04:57.203 14:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:57.203 14:32:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2146249 00:04:57.203 14:32:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2146249 00:04:57.203 14:32:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.203 14:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2146249 ']' 00:04:57.203 14:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.203 14:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.203 14:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.203 14:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.203 14:32:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.203 [2024-07-25 14:32:17.487821] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:04:57.203 [2024-07-25 14:32:17.487862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146249 ] 00:04:57.462 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.462 [2024-07-25 14:32:17.540411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.462 [2024-07-25 14:32:17.620292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:58.032 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.290 [2024-07-25 14:32:18.346390] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:04:58.290 [2024-07-25 14:32:18.346437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146396 ] 00:04:58.290 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.290 [2024-07-25 14:32:18.398914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.290 [2024-07-25 14:32:18.471518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.290 [2024-07-25 14:32:18.471583] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:58.290 [2024-07-25 14:32:18.471592] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:58.290 [2024-07-25 14:32:18.471598] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:58.290 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:58.290 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:58.290 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:58.290 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:58.290 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:58.290 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:58.290 14:32:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:58.290 14:32:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2146249 00:04:58.290 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2146249 ']' 00:04:58.290 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2146249 00:04:58.290 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:58.290 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.290 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2146249 00:04:58.549 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.549 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.549 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2146249' 00:04:58.549 killing process with pid 2146249 00:04:58.549 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2146249 00:04:58.549 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2146249 00:04:58.808 00:04:58.808 real 0m1.457s 00:04:58.808 user 0m1.698s 00:04:58.808 sys 0m0.378s 00:04:58.808 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.808 14:32:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.808 ************************************ 00:04:58.808 END TEST exit_on_failed_rpc_init 00:04:58.808 ************************************ 00:04:58.808 14:32:18 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:58.808 14:32:18 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:58.808 00:04:58.808 real 0m14.015s 00:04:58.808 user 0m13.662s 00:04:58.808 sys 0m1.475s 00:04:58.808 14:32:18 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.808 14:32:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.808 ************************************ 00:04:58.808 END TEST skip_rpc 00:04:58.808 ************************************ 00:04:58.808 14:32:18 -- common/autotest_common.sh@1142 -- # return 0 00:04:58.808 14:32:18 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:58.808 14:32:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.808 14:32:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.808 14:32:18 -- common/autotest_common.sh@10 -- # set +x 00:04:58.808 ************************************ 00:04:58.808 START TEST rpc_client 00:04:58.808 ************************************ 00:04:58.808 14:32:19 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:58.808 * Looking for test storage... 00:04:58.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:58.808 14:32:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:59.068 OK 00:04:59.068 14:32:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:59.068 00:04:59.068 real 0m0.110s 00:04:59.068 user 0m0.050s 00:04:59.068 sys 0m0.068s 00:04:59.068 14:32:19 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.068 14:32:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:59.068 ************************************ 00:04:59.068 END TEST rpc_client 00:04:59.068 ************************************ 00:04:59.068 14:32:19 -- common/autotest_common.sh@1142 -- # return 0 00:04:59.068 14:32:19 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:59.068 14:32:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.068 14:32:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.068 14:32:19 -- common/autotest_common.sh@10 -- # set +x 00:04:59.068 ************************************ 00:04:59.068 START TEST json_config 00:04:59.068 ************************************ 00:04:59.068 14:32:19 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:59.068 14:32:19 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.068 
14:32:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:59.068 14:32:19 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.068 14:32:19 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.068 14:32:19 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.068 14:32:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.068 14:32:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.068 14:32:19 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.068 14:32:19 json_config -- paths/export.sh@5 -- # export PATH 00:04:59.068 14:32:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@47 -- # : 0 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:59.068 14:32:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.068 14:32:19 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.069 14:32:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.069 14:32:19 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:59.069 14:32:19 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:59.069 14:32:19 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:59.069 INFO: JSON configuration test init 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:59.069 14:32:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.069 14:32:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:59.069 14:32:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.069 14:32:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.069 14:32:19 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:59.069 14:32:19 json_config -- json_config/common.sh@9 -- # local app=target 00:04:59.069 14:32:19 json_config -- json_config/common.sh@10 -- # shift 00:04:59.069 14:32:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.069 14:32:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.069 14:32:19 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.069 14:32:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.069 14:32:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.069 14:32:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2146607 00:04:59.069 14:32:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.069 Waiting for target to run... 00:04:59.069 14:32:19 json_config -- json_config/common.sh@25 -- # waitforlisten 2146607 /var/tmp/spdk_tgt.sock 00:04:59.069 14:32:19 json_config -- common/autotest_common.sh@829 -- # '[' -z 2146607 ']' 00:04:59.069 14:32:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:59.069 14:32:19 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.069 14:32:19 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.069 14:32:19 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.069 14:32:19 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.069 14:32:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.069 [2024-07-25 14:32:19.320830] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:04:59.069 [2024-07-25 14:32:19.320882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146607 ] 00:04:59.069 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.328 [2024-07-25 14:32:19.586912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.587 [2024-07-25 14:32:19.656413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.847 14:32:20 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.847 14:32:20 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:59.847 14:32:20 json_config -- json_config/common.sh@26 -- # echo '' 00:04:59.847 00:04:59.847 14:32:20 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:59.847 14:32:20 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:59.847 14:32:20 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.847 14:32:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.847 14:32:20 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:59.847 14:32:20 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:59.847 14:32:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.847 14:32:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.106 14:32:20 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:00.106 14:32:20 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:00.106 14:32:20 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:03.397 14:32:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.397 14:32:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:03.397 14:32:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@51 -- # sort 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:03.397 14:32:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.397 14:32:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:03.397 14:32:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.397 14:32:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:03.397 14:32:23 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:03.397 MallocForNvmf0 00:05:03.397 14:32:23 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:03.397 14:32:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:03.657 MallocForNvmf1 00:05:03.657 14:32:23 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:03.657 14:32:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:03.917 [2024-07-25 14:32:23.971532] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.917 14:32:23 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.917 14:32:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.917 14:32:24 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:03.917 14:32:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:04.176 14:32:24 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:04.176 14:32:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:04.436 14:32:24 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:04.436 14:32:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:04.436 [2024-07-25 14:32:24.653663] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.436 14:32:24 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:04.436 14:32:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.436 14:32:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.436 14:32:24 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:04.436 14:32:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.436 14:32:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.695 14:32:24 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:04.695 14:32:24 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:04.695 14:32:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:04.695 MallocBdevForConfigChangeCheck 00:05:04.695 14:32:24 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:04.695 14:32:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.695 14:32:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.695 14:32:24 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:04.695 14:32:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.955 14:32:25 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:05.215 INFO: shutting down applications... 00:05:05.215 14:32:25 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:05.215 14:32:25 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:05.215 14:32:25 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:05.215 14:32:25 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:06.596 Calling clear_iscsi_subsystem 00:05:06.596 Calling clear_nvmf_subsystem 00:05:06.596 Calling clear_nbd_subsystem 00:05:06.596 Calling clear_ublk_subsystem 00:05:06.596 Calling clear_vhost_blk_subsystem 00:05:06.596 Calling clear_vhost_scsi_subsystem 00:05:06.596 Calling clear_bdev_subsystem 00:05:06.596 14:32:26 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:06.596 14:32:26 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:06.596 14:32:26 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:06.596 14:32:26 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:06.596 14:32:26 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.596 14:32:26 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:06.856 14:32:27 json_config -- json_config/json_config.sh@349 -- # break 00:05:06.856 14:32:27 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:06.856 14:32:27 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:06.856 14:32:27 json_config -- json_config/common.sh@31 -- # local app=target 00:05:06.856 14:32:27 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:06.856 14:32:27 json_config -- json_config/common.sh@35 -- # [[ -n 2146607 ]] 00:05:06.856 14:32:27 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2146607 00:05:06.856 14:32:27 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:06.856 14:32:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.856 14:32:27 json_config -- json_config/common.sh@41 -- # kill -0 2146607 00:05:06.856 14:32:27 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.427 14:32:27 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.427 14:32:27 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.427 14:32:27 json_config -- json_config/common.sh@41 -- # kill -0 2146607 00:05:07.427 14:32:27 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.427 14:32:27 json_config -- json_config/common.sh@43 -- # break 00:05:07.427 14:32:27 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.427 14:32:27 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.427 SPDK target shutdown done 00:05:07.427 14:32:27 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:07.427 INFO: relaunching applications... 00:05:07.427 14:32:27 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.427 14:32:27 json_config -- json_config/common.sh@9 -- # local app=target 00:05:07.427 14:32:27 json_config -- json_config/common.sh@10 -- # shift 00:05:07.427 14:32:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.427 14:32:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.427 14:32:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.427 14:32:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.427 14:32:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.427 14:32:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2148116 00:05:07.427 14:32:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.427 Waiting for target to run... 00:05:07.427 14:32:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.427 14:32:27 json_config -- json_config/common.sh@25 -- # waitforlisten 2148116 /var/tmp/spdk_tgt.sock 00:05:07.427 14:32:27 json_config -- common/autotest_common.sh@829 -- # '[' -z 2148116 ']' 00:05:07.427 14:32:27 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.427 14:32:27 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.427 14:32:27 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.427 14:32:27 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.427 14:32:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.427 [2024-07-25 14:32:27.689529] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:05:07.427 [2024-07-25 14:32:27.689595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148116 ] 00:05:07.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.996 [2024-07-25 14:32:28.137651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.996 [2024-07-25 14:32:28.229501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.283 [2024-07-25 14:32:31.247720] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.284 [2024-07-25 14:32:31.280032] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:11.853 14:32:31 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.853 14:32:31 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:11.853 14:32:31 json_config -- json_config/common.sh@26 -- # echo '' 00:05:11.853 00:05:11.853 14:32:31 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:11.853 14:32:31 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:11.853 INFO: Checking if target configuration is the same... 00:05:11.853 14:32:31 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.853 14:32:31 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:11.853 14:32:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.853 + '[' 2 -ne 2 ']' 00:05:11.853 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:11.853 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:11.853 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:11.853 +++ basename /dev/fd/62 00:05:11.853 ++ mktemp /tmp/62.XXX 00:05:11.853 + tmp_file_1=/tmp/62.wn5 00:05:11.853 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.853 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.853 + tmp_file_2=/tmp/spdk_tgt_config.json.G5W 00:05:11.853 + ret=0 00:05:11.853 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.113 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.113 + diff -u /tmp/62.wn5 /tmp/spdk_tgt_config.json.G5W 00:05:12.113 + echo 'INFO: JSON config files are the same' 00:05:12.113 INFO: JSON config files are the same 00:05:12.113 + rm /tmp/62.wn5 /tmp/spdk_tgt_config.json.G5W 00:05:12.113 + exit 0 00:05:12.113 14:32:32 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:12.113 14:32:32 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:12.113 INFO: changing configuration and checking if this can be detected... 
00:05:12.113 14:32:32 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:12.113 14:32:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:12.113 14:32:32 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.113 14:32:32 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:12.113 14:32:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.113 + '[' 2 -ne 2 ']' 00:05:12.113 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:12.113 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:12.113 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:12.113 +++ basename /dev/fd/62 00:05:12.113 ++ mktemp /tmp/62.XXX 00:05:12.113 + tmp_file_1=/tmp/62.nTa 00:05:12.372 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.372 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:12.372 + tmp_file_2=/tmp/spdk_tgt_config.json.pm4 00:05:12.372 + ret=0 00:05:12.372 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.632 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.632 + diff -u /tmp/62.nTa /tmp/spdk_tgt_config.json.pm4 00:05:12.632 + ret=1 00:05:12.632 + echo '=== Start of file: /tmp/62.nTa ===' 00:05:12.632 + cat /tmp/62.nTa 00:05:12.632 + echo '=== End of file: /tmp/62.nTa ===' 00:05:12.632 + echo '' 00:05:12.632 + echo '=== Start of file: /tmp/spdk_tgt_config.json.pm4 ===' 00:05:12.632 + cat /tmp/spdk_tgt_config.json.pm4 00:05:12.632 + echo '=== End of file: /tmp/spdk_tgt_config.json.pm4 ===' 00:05:12.632 + echo '' 00:05:12.632 + rm /tmp/62.nTa /tmp/spdk_tgt_config.json.pm4 00:05:12.632 + exit 1 00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:12.632 INFO: configuration change detected. 
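A minimal sketch of the change-detection step traced above, using only commands that appear in this log. It assumes config_filter.py reads JSON on stdin (in the test itself the json_diff.sh helper does that plumbing), paths are shortened to the SPDK repo root, and the intermediate file names are illustrative rather than the mktemp paths used above:

    # drop the sentinel bdev so the live configuration diverges from the saved one
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    # dump the live configuration, normalize both JSON documents, and compare
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > live_config.json
    test/json_config/config_filter.py -method sort < live_config.json > live_sorted.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > saved_sorted.json
    # diff exits non-zero when the sorted dumps differ, which is what the test treats as a detected change
    diff -u saved_sorted.json live_sorted.json || echo 'INFO: configuration change detected.'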
00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@321 -- # [[ -n 2148116 ]] 00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.632 14:32:32 json_config -- json_config/json_config.sh@327 -- # killprocess 2148116 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@948 -- # '[' -z 2148116 ']' 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@952 -- # kill -0 2148116 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@953 -- # uname 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2148116 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2148116' 00:05:12.632 killing process with pid 2148116 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@967 -- # kill 2148116 00:05:12.632 14:32:32 json_config -- common/autotest_common.sh@972 -- # wait 2148116 00:05:14.538 14:32:34 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.538 14:32:34 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:14.538 14:32:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:14.538 14:32:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.538 14:32:34 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:14.538 14:32:34 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:14.538 INFO: Success 00:05:14.538 00:05:14.538 real 0m15.171s 
00:05:14.538 user 0m15.907s 00:05:14.538 sys 0m1.926s 00:05:14.538 14:32:34 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.538 14:32:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.538 ************************************ 00:05:14.538 END TEST json_config 00:05:14.538 ************************************ 00:05:14.538 14:32:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:14.538 14:32:34 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:14.538 14:32:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.538 14:32:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.538 14:32:34 -- common/autotest_common.sh@10 -- # set +x 00:05:14.538 ************************************ 00:05:14.538 START TEST json_config_extra_key 00:05:14.538 ************************************ 00:05:14.538 14:32:34 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:14.538 14:32:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.538 14:32:34 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:14.538 14:32:34 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.538 14:32:34 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.538 14:32:34 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.538 14:32:34 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.538 14:32:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.539 14:32:34 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.539 14:32:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:14.539 14:32:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.539 14:32:34 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:14.539 14:32:34 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:14.539 14:32:34 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:14.539 14:32:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.539 14:32:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.539 14:32:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.539 14:32:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:14.539 14:32:34 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:14.539 14:32:34 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:14.539 14:32:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:14.539 14:32:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:14.539 14:32:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:14.539 14:32:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:14.539 14:32:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:14.539 14:32:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:14.539 14:32:34 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:14.539 14:32:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:14.539 14:32:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:14.539 14:32:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.539 14:32:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:14.539 INFO: launching applications... 00:05:14.539 14:32:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:14.539 14:32:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:14.539 14:32:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:14.539 14:32:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.539 14:32:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.539 14:32:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.539 14:32:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.539 14:32:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.539 14:32:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2149390 00:05:14.539 14:32:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.539 Waiting for target to run... 00:05:14.539 14:32:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2149390 /var/tmp/spdk_tgt.sock 00:05:14.539 14:32:34 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2149390 ']' 00:05:14.539 14:32:34 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:14.539 14:32:34 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.539 14:32:34 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.539 14:32:34 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.539 14:32:34 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.539 14:32:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:14.539 [2024-07-25 14:32:34.536275] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:05:14.539 [2024-07-25 14:32:34.536328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2149390 ] 00:05:14.539 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.539 [2024-07-25 14:32:34.803378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.799 [2024-07-25 14:32:34.872541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.064 14:32:35 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.064 14:32:35 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:15.064 14:32:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:15.064 00:05:15.065 14:32:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:15.065 INFO: shutting down applications... 00:05:15.065 14:32:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:15.065 14:32:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:15.065 14:32:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:15.065 14:32:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2149390 ]] 00:05:15.065 14:32:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2149390 00:05:15.065 14:32:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:15.065 14:32:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.065 14:32:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2149390 00:05:15.065 14:32:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.701 14:32:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.701 14:32:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.701 14:32:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2149390 00:05:15.701 14:32:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:15.701 14:32:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:15.701 14:32:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:15.701 14:32:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:15.701 SPDK target shutdown done 00:05:15.701 14:32:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:15.701 Success 00:05:15.701 00:05:15.701 real 0m1.440s 00:05:15.701 user 0m1.241s 00:05:15.701 sys 0m0.353s 00:05:15.701 14:32:35 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.701 14:32:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:15.701 ************************************ 00:05:15.701 END TEST json_config_extra_key 00:05:15.701 ************************************ 00:05:15.701 14:32:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.701 14:32:35 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.701 14:32:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.701 14:32:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.701 14:32:35 -- 
common/autotest_common.sh@10 -- # set +x 00:05:15.701 ************************************ 00:05:15.701 START TEST alias_rpc 00:05:15.701 ************************************ 00:05:15.701 14:32:35 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.701 * Looking for test storage... 00:05:15.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:15.962 14:32:35 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:15.962 14:32:35 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.962 14:32:35 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2149752 00:05:15.962 14:32:35 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2149752 00:05:15.962 14:32:35 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2149752 ']' 00:05:15.962 14:32:35 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.962 14:32:35 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.962 14:32:35 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.962 14:32:35 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.962 14:32:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.962 [2024-07-25 14:32:36.033213] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:05:15.962 [2024-07-25 14:32:36.033265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2149752 ] 00:05:15.962 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.962 [2024-07-25 14:32:36.087132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.962 [2024-07-25 14:32:36.167861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.902 14:32:36 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.902 14:32:36 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:16.902 14:32:36 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:16.902 14:32:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2149752 00:05:16.902 14:32:37 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2149752 ']' 00:05:16.902 14:32:37 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2149752 00:05:16.902 14:32:37 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:16.902 14:32:37 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.902 14:32:37 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2149752 00:05:16.902 14:32:37 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.902 14:32:37 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.902 14:32:37 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2149752' 00:05:16.902 killing process with pid 2149752 00:05:16.902 14:32:37 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 2149752 00:05:16.902 14:32:37 alias_rpc -- common/autotest_common.sh@972 -- # wait 2149752 00:05:17.162 00:05:17.162 real 0m1.505s 00:05:17.162 user 0m1.661s 00:05:17.162 sys 0m0.393s 00:05:17.162 14:32:37 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.162 14:32:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.162 ************************************ 00:05:17.162 END TEST alias_rpc 00:05:17.162 ************************************ 00:05:17.162 14:32:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:17.162 14:32:37 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:17.162 14:32:37 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:17.162 14:32:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.162 14:32:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.162 14:32:37 -- common/autotest_common.sh@10 -- # set +x 00:05:17.422 ************************************ 00:05:17.422 START TEST spdkcli_tcp 00:05:17.422 ************************************ 00:05:17.422 14:32:37 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:17.422 * Looking for test storage... 00:05:17.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:17.422 14:32:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:17.422 14:32:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:17.422 14:32:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:17.422 14:32:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:17.422 14:32:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:17.422 14:32:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:17.422 14:32:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:17.422 14:32:37 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:17.422 14:32:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.422 14:32:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2150157 00:05:17.422 14:32:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:17.422 14:32:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2150157 00:05:17.422 14:32:37 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2150157 ']' 00:05:17.422 14:32:37 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.422 14:32:37 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.422 14:32:37 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.422 14:32:37 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.422 14:32:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.422 [2024-07-25 14:32:37.607059] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
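The alias_rpc run that finished above starts a bare spdk_tgt, feeds a configuration into it with rpc.py load_config -i (JSON read from stdin) and then kills the target. Roughly the same call pattern by hand; the inline JSON below is an illustrative empty config, not the one the test uses:

./build/bin/spdk_tgt &
tgt_pid=$!
sleep 1   # crude stand-in for the test's waitforlisten helper

# load_config -i reads a JSON configuration from stdin and applies it over RPC
./scripts/rpc.py load_config -i <<'EOF'
{ "subsystems": [] }
EOF

kill -SIGINT "$tgt_pid"
wait "$tgt_pid"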
00:05:17.422 [2024-07-25 14:32:37.607113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150157 ] 00:05:17.422 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.422 [2024-07-25 14:32:37.660287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.682 [2024-07-25 14:32:37.743727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.682 [2024-07-25 14:32:37.743730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.251 14:32:38 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.251 14:32:38 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:18.251 14:32:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2150180 00:05:18.251 14:32:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:18.251 14:32:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:18.512 [ 00:05:18.512 "bdev_malloc_delete", 00:05:18.512 "bdev_malloc_create", 00:05:18.512 "bdev_null_resize", 00:05:18.512 "bdev_null_delete", 00:05:18.512 "bdev_null_create", 00:05:18.512 "bdev_nvme_cuse_unregister", 00:05:18.512 "bdev_nvme_cuse_register", 00:05:18.512 "bdev_opal_new_user", 00:05:18.512 "bdev_opal_set_lock_state", 00:05:18.512 "bdev_opal_delete", 00:05:18.512 "bdev_opal_get_info", 00:05:18.512 "bdev_opal_create", 00:05:18.512 "bdev_nvme_opal_revert", 00:05:18.512 "bdev_nvme_opal_init", 00:05:18.512 "bdev_nvme_send_cmd", 00:05:18.512 "bdev_nvme_get_path_iostat", 00:05:18.512 "bdev_nvme_get_mdns_discovery_info", 00:05:18.512 "bdev_nvme_stop_mdns_discovery", 00:05:18.512 "bdev_nvme_start_mdns_discovery", 00:05:18.512 "bdev_nvme_set_multipath_policy", 00:05:18.512 "bdev_nvme_set_preferred_path", 00:05:18.512 "bdev_nvme_get_io_paths", 00:05:18.512 "bdev_nvme_remove_error_injection", 00:05:18.512 "bdev_nvme_add_error_injection", 00:05:18.512 "bdev_nvme_get_discovery_info", 00:05:18.512 "bdev_nvme_stop_discovery", 00:05:18.512 "bdev_nvme_start_discovery", 00:05:18.512 "bdev_nvme_get_controller_health_info", 00:05:18.512 "bdev_nvme_disable_controller", 00:05:18.512 "bdev_nvme_enable_controller", 00:05:18.512 "bdev_nvme_reset_controller", 00:05:18.512 "bdev_nvme_get_transport_statistics", 00:05:18.512 "bdev_nvme_apply_firmware", 00:05:18.512 "bdev_nvme_detach_controller", 00:05:18.512 "bdev_nvme_get_controllers", 00:05:18.512 "bdev_nvme_attach_controller", 00:05:18.512 "bdev_nvme_set_hotplug", 00:05:18.512 "bdev_nvme_set_options", 00:05:18.512 "bdev_passthru_delete", 00:05:18.512 "bdev_passthru_create", 00:05:18.512 "bdev_lvol_set_parent_bdev", 00:05:18.512 "bdev_lvol_set_parent", 00:05:18.513 "bdev_lvol_check_shallow_copy", 00:05:18.513 "bdev_lvol_start_shallow_copy", 00:05:18.513 "bdev_lvol_grow_lvstore", 00:05:18.513 "bdev_lvol_get_lvols", 00:05:18.513 "bdev_lvol_get_lvstores", 00:05:18.513 "bdev_lvol_delete", 00:05:18.513 "bdev_lvol_set_read_only", 00:05:18.513 "bdev_lvol_resize", 00:05:18.513 "bdev_lvol_decouple_parent", 00:05:18.513 "bdev_lvol_inflate", 00:05:18.513 "bdev_lvol_rename", 00:05:18.513 "bdev_lvol_clone_bdev", 00:05:18.513 "bdev_lvol_clone", 00:05:18.513 "bdev_lvol_snapshot", 00:05:18.513 "bdev_lvol_create", 00:05:18.513 "bdev_lvol_delete_lvstore", 00:05:18.513 
"bdev_lvol_rename_lvstore", 00:05:18.513 "bdev_lvol_create_lvstore", 00:05:18.513 "bdev_raid_set_options", 00:05:18.513 "bdev_raid_remove_base_bdev", 00:05:18.513 "bdev_raid_add_base_bdev", 00:05:18.513 "bdev_raid_delete", 00:05:18.513 "bdev_raid_create", 00:05:18.513 "bdev_raid_get_bdevs", 00:05:18.513 "bdev_error_inject_error", 00:05:18.513 "bdev_error_delete", 00:05:18.513 "bdev_error_create", 00:05:18.513 "bdev_split_delete", 00:05:18.513 "bdev_split_create", 00:05:18.513 "bdev_delay_delete", 00:05:18.513 "bdev_delay_create", 00:05:18.513 "bdev_delay_update_latency", 00:05:18.513 "bdev_zone_block_delete", 00:05:18.513 "bdev_zone_block_create", 00:05:18.513 "blobfs_create", 00:05:18.513 "blobfs_detect", 00:05:18.513 "blobfs_set_cache_size", 00:05:18.513 "bdev_aio_delete", 00:05:18.513 "bdev_aio_rescan", 00:05:18.513 "bdev_aio_create", 00:05:18.513 "bdev_ftl_set_property", 00:05:18.513 "bdev_ftl_get_properties", 00:05:18.513 "bdev_ftl_get_stats", 00:05:18.513 "bdev_ftl_unmap", 00:05:18.513 "bdev_ftl_unload", 00:05:18.513 "bdev_ftl_delete", 00:05:18.513 "bdev_ftl_load", 00:05:18.513 "bdev_ftl_create", 00:05:18.513 "bdev_virtio_attach_controller", 00:05:18.513 "bdev_virtio_scsi_get_devices", 00:05:18.513 "bdev_virtio_detach_controller", 00:05:18.513 "bdev_virtio_blk_set_hotplug", 00:05:18.513 "bdev_iscsi_delete", 00:05:18.513 "bdev_iscsi_create", 00:05:18.513 "bdev_iscsi_set_options", 00:05:18.513 "accel_error_inject_error", 00:05:18.513 "ioat_scan_accel_module", 00:05:18.513 "dsa_scan_accel_module", 00:05:18.513 "iaa_scan_accel_module", 00:05:18.513 "vfu_virtio_create_scsi_endpoint", 00:05:18.513 "vfu_virtio_scsi_remove_target", 00:05:18.513 "vfu_virtio_scsi_add_target", 00:05:18.513 "vfu_virtio_create_blk_endpoint", 00:05:18.513 "vfu_virtio_delete_endpoint", 00:05:18.513 "keyring_file_remove_key", 00:05:18.513 "keyring_file_add_key", 00:05:18.513 "keyring_linux_set_options", 00:05:18.513 "iscsi_get_histogram", 00:05:18.513 "iscsi_enable_histogram", 00:05:18.513 "iscsi_set_options", 00:05:18.513 "iscsi_get_auth_groups", 00:05:18.513 "iscsi_auth_group_remove_secret", 00:05:18.513 "iscsi_auth_group_add_secret", 00:05:18.513 "iscsi_delete_auth_group", 00:05:18.513 "iscsi_create_auth_group", 00:05:18.513 "iscsi_set_discovery_auth", 00:05:18.513 "iscsi_get_options", 00:05:18.513 "iscsi_target_node_request_logout", 00:05:18.513 "iscsi_target_node_set_redirect", 00:05:18.513 "iscsi_target_node_set_auth", 00:05:18.513 "iscsi_target_node_add_lun", 00:05:18.513 "iscsi_get_stats", 00:05:18.513 "iscsi_get_connections", 00:05:18.513 "iscsi_portal_group_set_auth", 00:05:18.513 "iscsi_start_portal_group", 00:05:18.513 "iscsi_delete_portal_group", 00:05:18.513 "iscsi_create_portal_group", 00:05:18.513 "iscsi_get_portal_groups", 00:05:18.513 "iscsi_delete_target_node", 00:05:18.513 "iscsi_target_node_remove_pg_ig_maps", 00:05:18.513 "iscsi_target_node_add_pg_ig_maps", 00:05:18.513 "iscsi_create_target_node", 00:05:18.513 "iscsi_get_target_nodes", 00:05:18.513 "iscsi_delete_initiator_group", 00:05:18.513 "iscsi_initiator_group_remove_initiators", 00:05:18.513 "iscsi_initiator_group_add_initiators", 00:05:18.513 "iscsi_create_initiator_group", 00:05:18.513 "iscsi_get_initiator_groups", 00:05:18.513 "nvmf_set_crdt", 00:05:18.513 "nvmf_set_config", 00:05:18.513 "nvmf_set_max_subsystems", 00:05:18.513 "nvmf_stop_mdns_prr", 00:05:18.513 "nvmf_publish_mdns_prr", 00:05:18.513 "nvmf_subsystem_get_listeners", 00:05:18.513 "nvmf_subsystem_get_qpairs", 00:05:18.513 "nvmf_subsystem_get_controllers", 00:05:18.513 
"nvmf_get_stats", 00:05:18.513 "nvmf_get_transports", 00:05:18.513 "nvmf_create_transport", 00:05:18.513 "nvmf_get_targets", 00:05:18.513 "nvmf_delete_target", 00:05:18.513 "nvmf_create_target", 00:05:18.513 "nvmf_subsystem_allow_any_host", 00:05:18.513 "nvmf_subsystem_remove_host", 00:05:18.513 "nvmf_subsystem_add_host", 00:05:18.513 "nvmf_ns_remove_host", 00:05:18.513 "nvmf_ns_add_host", 00:05:18.513 "nvmf_subsystem_remove_ns", 00:05:18.513 "nvmf_subsystem_add_ns", 00:05:18.513 "nvmf_subsystem_listener_set_ana_state", 00:05:18.513 "nvmf_discovery_get_referrals", 00:05:18.513 "nvmf_discovery_remove_referral", 00:05:18.513 "nvmf_discovery_add_referral", 00:05:18.513 "nvmf_subsystem_remove_listener", 00:05:18.513 "nvmf_subsystem_add_listener", 00:05:18.513 "nvmf_delete_subsystem", 00:05:18.513 "nvmf_create_subsystem", 00:05:18.513 "nvmf_get_subsystems", 00:05:18.513 "env_dpdk_get_mem_stats", 00:05:18.513 "nbd_get_disks", 00:05:18.513 "nbd_stop_disk", 00:05:18.513 "nbd_start_disk", 00:05:18.513 "ublk_recover_disk", 00:05:18.513 "ublk_get_disks", 00:05:18.513 "ublk_stop_disk", 00:05:18.513 "ublk_start_disk", 00:05:18.513 "ublk_destroy_target", 00:05:18.513 "ublk_create_target", 00:05:18.513 "virtio_blk_create_transport", 00:05:18.513 "virtio_blk_get_transports", 00:05:18.513 "vhost_controller_set_coalescing", 00:05:18.513 "vhost_get_controllers", 00:05:18.513 "vhost_delete_controller", 00:05:18.513 "vhost_create_blk_controller", 00:05:18.513 "vhost_scsi_controller_remove_target", 00:05:18.513 "vhost_scsi_controller_add_target", 00:05:18.513 "vhost_start_scsi_controller", 00:05:18.513 "vhost_create_scsi_controller", 00:05:18.513 "thread_set_cpumask", 00:05:18.513 "framework_get_governor", 00:05:18.513 "framework_get_scheduler", 00:05:18.513 "framework_set_scheduler", 00:05:18.513 "framework_get_reactors", 00:05:18.513 "thread_get_io_channels", 00:05:18.513 "thread_get_pollers", 00:05:18.513 "thread_get_stats", 00:05:18.513 "framework_monitor_context_switch", 00:05:18.513 "spdk_kill_instance", 00:05:18.513 "log_enable_timestamps", 00:05:18.513 "log_get_flags", 00:05:18.513 "log_clear_flag", 00:05:18.513 "log_set_flag", 00:05:18.513 "log_get_level", 00:05:18.513 "log_set_level", 00:05:18.513 "log_get_print_level", 00:05:18.513 "log_set_print_level", 00:05:18.513 "framework_enable_cpumask_locks", 00:05:18.513 "framework_disable_cpumask_locks", 00:05:18.513 "framework_wait_init", 00:05:18.513 "framework_start_init", 00:05:18.513 "scsi_get_devices", 00:05:18.513 "bdev_get_histogram", 00:05:18.513 "bdev_enable_histogram", 00:05:18.513 "bdev_set_qos_limit", 00:05:18.513 "bdev_set_qd_sampling_period", 00:05:18.513 "bdev_get_bdevs", 00:05:18.513 "bdev_reset_iostat", 00:05:18.513 "bdev_get_iostat", 00:05:18.513 "bdev_examine", 00:05:18.513 "bdev_wait_for_examine", 00:05:18.513 "bdev_set_options", 00:05:18.513 "notify_get_notifications", 00:05:18.513 "notify_get_types", 00:05:18.513 "accel_get_stats", 00:05:18.513 "accel_set_options", 00:05:18.513 "accel_set_driver", 00:05:18.513 "accel_crypto_key_destroy", 00:05:18.513 "accel_crypto_keys_get", 00:05:18.513 "accel_crypto_key_create", 00:05:18.513 "accel_assign_opc", 00:05:18.513 "accel_get_module_info", 00:05:18.513 "accel_get_opc_assignments", 00:05:18.513 "vmd_rescan", 00:05:18.513 "vmd_remove_device", 00:05:18.513 "vmd_enable", 00:05:18.513 "sock_get_default_impl", 00:05:18.513 "sock_set_default_impl", 00:05:18.513 "sock_impl_set_options", 00:05:18.513 "sock_impl_get_options", 00:05:18.513 "iobuf_get_stats", 00:05:18.513 "iobuf_set_options", 
00:05:18.513 "keyring_get_keys", 00:05:18.513 "framework_get_pci_devices", 00:05:18.513 "framework_get_config", 00:05:18.513 "framework_get_subsystems", 00:05:18.513 "vfu_tgt_set_base_path", 00:05:18.513 "trace_get_info", 00:05:18.513 "trace_get_tpoint_group_mask", 00:05:18.513 "trace_disable_tpoint_group", 00:05:18.513 "trace_enable_tpoint_group", 00:05:18.513 "trace_clear_tpoint_mask", 00:05:18.513 "trace_set_tpoint_mask", 00:05:18.513 "spdk_get_version", 00:05:18.513 "rpc_get_methods" 00:05:18.513 ] 00:05:18.513 14:32:38 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:18.513 14:32:38 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.513 14:32:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.513 14:32:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:18.513 14:32:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2150157 00:05:18.513 14:32:38 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2150157 ']' 00:05:18.513 14:32:38 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2150157 00:05:18.513 14:32:38 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:18.513 14:32:38 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.513 14:32:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2150157 00:05:18.513 14:32:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.513 14:32:38 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.514 14:32:38 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2150157' 00:05:18.514 killing process with pid 2150157 00:05:18.514 14:32:38 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2150157 00:05:18.514 14:32:38 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2150157 00:05:18.774 00:05:18.774 real 0m1.506s 00:05:18.774 user 0m2.844s 00:05:18.774 sys 0m0.406s 00:05:18.774 14:32:38 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.774 14:32:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.774 ************************************ 00:05:18.774 END TEST spdkcli_tcp 00:05:18.774 ************************************ 00:05:18.774 14:32:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:18.774 14:32:39 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:18.774 14:32:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.774 14:32:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.774 14:32:39 -- common/autotest_common.sh@10 -- # set +x 00:05:18.774 ************************************ 00:05:18.774 START TEST dpdk_mem_utility 00:05:18.774 ************************************ 00:05:18.774 14:32:39 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.035 * Looking for test storage... 
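The spdkcli_tcp run that finishes above exposes the target's UNIX RPC socket on TCP with socat and then issues rpc_get_methods against 127.0.0.1:9998, which is what produces the long method listing in the trace. The same bridge reduced to its parts, using the addresses and retry options from the test:

./build/bin/spdk_tgt -m 0x3 -p 0 &
tgt_pid=$!

# bridge the UNIX RPC socket to TCP port 9998
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# talk to the target over TCP instead of the UNIX socket
./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid"
kill -SIGINT "$tgt_pid"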
00:05:19.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:19.035 14:32:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:19.035 14:32:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2150469 00:05:19.035 14:32:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.035 14:32:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2150469 00:05:19.035 14:32:39 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2150469 ']' 00:05:19.035 14:32:39 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.035 14:32:39 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.035 14:32:39 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.035 14:32:39 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.035 14:32:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.035 [2024-07-25 14:32:39.189061] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:05:19.035 [2024-07-25 14:32:39.189110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150469 ] 00:05:19.035 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.035 [2024-07-25 14:32:39.242380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.035 [2024-07-25 14:32:39.314939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.976 14:32:39 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.976 14:32:39 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:19.976 14:32:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:19.976 14:32:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:19.976 14:32:39 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.976 14:32:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.976 { 00:05:19.976 "filename": "/tmp/spdk_mem_dump.txt" 00:05:19.976 } 00:05:19.976 14:32:39 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.976 14:32:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:19.976 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:19.976 1 heaps totaling size 814.000000 MiB 00:05:19.976 size: 814.000000 MiB heap id: 0 00:05:19.976 end heaps---------- 00:05:19.976 8 mempools totaling size 598.116089 MiB 00:05:19.976 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:19.976 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:19.976 size: 84.521057 MiB name: bdev_io_2150469 00:05:19.976 size: 51.011292 MiB name: evtpool_2150469 00:05:19.976 
size: 50.003479 MiB name: msgpool_2150469 00:05:19.976 size: 21.763794 MiB name: PDU_Pool 00:05:19.976 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:19.976 size: 0.026123 MiB name: Session_Pool 00:05:19.976 end mempools------- 00:05:19.976 6 memzones totaling size 4.142822 MiB 00:05:19.976 size: 1.000366 MiB name: RG_ring_0_2150469 00:05:19.976 size: 1.000366 MiB name: RG_ring_1_2150469 00:05:19.976 size: 1.000366 MiB name: RG_ring_4_2150469 00:05:19.976 size: 1.000366 MiB name: RG_ring_5_2150469 00:05:19.976 size: 0.125366 MiB name: RG_ring_2_2150469 00:05:19.976 size: 0.015991 MiB name: RG_ring_3_2150469 00:05:19.976 end memzones------- 00:05:19.976 14:32:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:19.976 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:19.976 list of free elements. size: 12.519348 MiB 00:05:19.976 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:19.976 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:19.976 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:19.976 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:19.976 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:19.976 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:19.976 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:19.976 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:19.976 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:19.976 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:19.976 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:19.976 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:19.976 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:19.976 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:19.976 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:19.976 list of standard malloc elements. 
size: 199.218079 MiB 00:05:19.976 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:19.976 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:19.976 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:19.976 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:19.976 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:19.976 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:19.976 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:19.976 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:19.976 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:19.976 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:19.976 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:19.976 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:19.976 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:19.976 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:19.976 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:19.976 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:19.976 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:19.976 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:19.976 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:19.976 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:19.976 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:19.976 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:19.976 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:19.976 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:19.976 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:19.976 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:19.976 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:19.976 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:19.976 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:19.976 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:19.976 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:19.976 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:19.976 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:19.976 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:19.976 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:19.976 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:19.976 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:19.976 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:19.976 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:19.976 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:19.976 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:19.976 list of memzone associated elements. 
size: 602.262573 MiB 00:05:19.976 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:19.976 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:19.976 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:19.976 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:19.977 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:19.977 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2150469_0 00:05:19.977 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:19.977 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2150469_0 00:05:19.977 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:19.977 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2150469_0 00:05:19.977 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:19.977 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:19.977 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:19.977 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:19.977 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:19.977 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2150469 00:05:19.977 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:19.977 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2150469 00:05:19.977 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:19.977 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2150469 00:05:19.977 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:19.977 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:19.977 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:19.977 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:19.977 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:19.977 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:19.977 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:19.977 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:19.977 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:19.977 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2150469 00:05:19.977 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:19.977 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2150469 00:05:19.977 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:19.977 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2150469 00:05:19.977 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:19.977 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2150469 00:05:19.977 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:19.977 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2150469 00:05:19.977 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:19.977 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:19.977 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:19.977 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:19.977 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:19.977 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:19.977 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:19.977 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2150469 00:05:19.977 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:19.977 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:19.977 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:19.977 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:19.977 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:19.977 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2150469 00:05:19.977 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:19.977 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:19.977 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:19.977 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2150469 00:05:19.977 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:19.977 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2150469 00:05:19.977 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:19.977 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:19.977 14:32:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:19.977 14:32:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2150469 00:05:19.977 14:32:40 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2150469 ']' 00:05:19.977 14:32:40 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2150469 00:05:19.977 14:32:40 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:19.977 14:32:40 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.977 14:32:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2150469 00:05:19.977 14:32:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.977 14:32:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.977 14:32:40 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2150469' 00:05:19.977 killing process with pid 2150469 00:05:19.977 14:32:40 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2150469 00:05:19.977 14:32:40 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2150469 00:05:20.237 00:05:20.237 real 0m1.389s 00:05:20.237 user 0m1.468s 00:05:20.237 sys 0m0.378s 00:05:20.237 14:32:40 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.237 14:32:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.237 ************************************ 00:05:20.237 END TEST dpdk_mem_utility 00:05:20.237 ************************************ 00:05:20.237 14:32:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:20.237 14:32:40 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.237 14:32:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.238 14:32:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.238 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.238 ************************************ 00:05:20.238 START TEST event 00:05:20.238 ************************************ 00:05:20.238 14:32:40 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.498 * Looking for test storage... 
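The dpdk_mem_utility run above asks a live target to dump DPDK memory statistics (env_dpdk_get_mem_stats, which reports the dump file /tmp/spdk_mem_dump.txt) and then post-processes that dump with scripts/dpdk_mem_info.py, first as the heap/mempool/memzone summary and then per-heap with -m 0. The same sequence issued by hand, roughly:

./build/bin/spdk_tgt &
tgt_pid=$!
sleep 1   # stand-in for waitforlisten

# write the memory dump; the RPC reply names the dump file it produced
./scripts/rpc.py env_dpdk_get_mem_stats

# summarize, then show the detailed layout of heap 0
./scripts/dpdk_mem_info.py
./scripts/dpdk_mem_info.py -m 0

kill -SIGINT "$tgt_pid"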
00:05:20.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:20.498 14:32:40 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:20.498 14:32:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:20.498 14:32:40 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.498 14:32:40 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:20.498 14:32:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.498 14:32:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.498 ************************************ 00:05:20.498 START TEST event_perf 00:05:20.498 ************************************ 00:05:20.498 14:32:40 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.498 Running I/O for 1 seconds...[2024-07-25 14:32:40.635539] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:05:20.498 [2024-07-25 14:32:40.635607] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150762 ] 00:05:20.498 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.498 [2024-07-25 14:32:40.699374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.498 [2024-07-25 14:32:40.774573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.498 [2024-07-25 14:32:40.774672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.498 [2024-07-25 14:32:40.774759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.498 [2024-07-25 14:32:40.774761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.877 Running I/O for 1 seconds... 00:05:21.877 lcore 0: 206663 00:05:21.877 lcore 1: 206661 00:05:21.877 lcore 2: 206663 00:05:21.877 lcore 3: 206664 00:05:21.877 done. 00:05:21.877 00:05:21.877 real 0m1.229s 00:05:21.877 user 0m4.143s 00:05:21.877 sys 0m0.079s 00:05:21.877 14:32:41 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.877 14:32:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.877 ************************************ 00:05:21.877 END TEST event_perf 00:05:21.877 ************************************ 00:05:21.877 14:32:41 event -- common/autotest_common.sh@1142 -- # return 0 00:05:21.877 14:32:41 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:21.877 14:32:41 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:21.877 14:32:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.877 14:32:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.877 ************************************ 00:05:21.877 START TEST event_reactor 00:05:21.877 ************************************ 00:05:21.877 14:32:41 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:21.877 [2024-07-25 14:32:41.936994] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:05:21.877 [2024-07-25 14:32:41.937064] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151015 ] 00:05:21.877 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.877 [2024-07-25 14:32:41.995824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.877 [2024-07-25 14:32:42.067987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.261 test_start 00:05:23.261 oneshot 00:05:23.261 tick 100 00:05:23.261 tick 100 00:05:23.261 tick 250 00:05:23.261 tick 100 00:05:23.261 tick 100 00:05:23.261 tick 100 00:05:23.261 tick 250 00:05:23.261 tick 500 00:05:23.261 tick 100 00:05:23.261 tick 100 00:05:23.261 tick 250 00:05:23.261 tick 100 00:05:23.261 tick 100 00:05:23.261 test_end 00:05:23.261 00:05:23.261 real 0m1.223s 00:05:23.261 user 0m1.149s 00:05:23.261 sys 0m0.070s 00:05:23.261 14:32:43 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.261 14:32:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:23.261 ************************************ 00:05:23.261 END TEST event_reactor 00:05:23.261 ************************************ 00:05:23.261 14:32:43 event -- common/autotest_common.sh@1142 -- # return 0 00:05:23.261 14:32:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.261 14:32:43 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:23.261 14:32:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.261 14:32:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.261 ************************************ 00:05:23.261 START TEST event_reactor_perf 00:05:23.261 ************************************ 00:05:23.261 14:32:43 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.261 [2024-07-25 14:32:43.230574] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:05:23.261 [2024-07-25 14:32:43.230638] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151261 ] 00:05:23.261 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.261 [2024-07-25 14:32:43.290060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.262 [2024-07-25 14:32:43.361618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.202 test_start 00:05:24.202 test_end 00:05:24.202 Performance: 500652 events per second 00:05:24.202 00:05:24.202 real 0m1.222s 00:05:24.202 user 0m1.140s 00:05:24.202 sys 0m0.077s 00:05:24.202 14:32:44 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.202 14:32:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.202 ************************************ 00:05:24.202 END TEST event_reactor_perf 00:05:24.202 ************************************ 00:05:24.202 14:32:44 event -- common/autotest_common.sh@1142 -- # return 0 00:05:24.202 14:32:44 event -- event/event.sh@49 -- # uname -s 00:05:24.202 14:32:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:24.202 14:32:44 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:24.202 14:32:44 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.203 14:32:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.203 14:32:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.463 ************************************ 00:05:24.463 START TEST event_scheduler 00:05:24.463 ************************************ 00:05:24.463 14:32:44 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:24.463 * Looking for test storage... 00:05:24.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:24.463 14:32:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:24.463 14:32:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2151535 00:05:24.463 14:32:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.463 14:32:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:24.463 14:32:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2151535 00:05:24.463 14:32:44 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2151535 ']' 00:05:24.463 14:32:44 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.463 14:32:44 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.463 14:32:44 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
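The three event micro-benchmarks above are stand-alone binaries run for a fixed time: event_perf round-trips events across four cores, while the reactor and reactor_perf apps exercise a single reactor. The invocations as they appear in the trace:

# one-second event benchmark on cores 0-3 (prints per-lcore event counts)
./test/event/event_perf/event_perf -m 0xF -t 1

# one-second single-core reactor tick test and reactor performance test
./test/event/reactor/reactor -t 1
./test/event/reactor_perf/reactor_perf -t 1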
00:05:24.463 14:32:44 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.463 14:32:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.463 [2024-07-25 14:32:44.639596] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:05:24.463 [2024-07-25 14:32:44.639641] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151535 ] 00:05:24.463 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.463 [2024-07-25 14:32:44.690327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:24.723 [2024-07-25 14:32:44.774446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.723 [2024-07-25 14:32:44.774539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.723 [2024-07-25 14:32:44.774640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.723 [2024-07-25 14:32:44.774642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.292 14:32:45 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.292 14:32:45 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:25.292 14:32:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:25.292 14:32:45 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.292 14:32:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.292 [2024-07-25 14:32:45.465002] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:25.292 [2024-07-25 14:32:45.465022] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:25.292 [2024-07-25 14:32:45.465032] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:25.292 [2024-07-25 14:32:45.465037] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:25.292 [2024-07-25 14:32:45.465048] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:25.292 14:32:45 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.292 14:32:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:25.292 14:32:45 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.292 14:32:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.292 [2024-07-25 14:32:45.537550] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
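The scheduler test application above is started with --wait-for-rpc, so it pauses before subsystem initialization; the test then selects the dynamic scheduler over RPC and releases initialization with framework_start_init. The same handshake applies to any SPDK app started with --wait-for-rpc; a sketch using the flags from this run:

./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
sched_pid=$!
sleep 1   # stand-in for waitforlisten

# while the app is waiting, pick the scheduler, then let init proceed
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init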
00:05:25.292 14:32:45 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.292 14:32:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:25.292 14:32:45 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.292 14:32:45 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.292 14:32:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.292 ************************************ 00:05:25.292 START TEST scheduler_create_thread 00:05:25.292 ************************************ 00:05:25.292 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:25.292 14:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:25.292 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.292 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.552 2 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.552 3 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.552 4 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.552 5 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.552 6 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.552 7 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.552 8 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.552 9 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.552 10 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.552 14:32:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.932 14:32:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.932 14:32:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:26.932 14:32:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:26.932 14:32:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.932 14:32:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.311 14:32:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.311 00:05:28.311 real 0m2.619s 00:05:28.311 user 0m0.022s 00:05:28.311 sys 0m0.006s 00:05:28.311 14:32:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.311 14:32:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.311 ************************************ 00:05:28.311 END TEST scheduler_create_thread 00:05:28.311 ************************************ 00:05:28.311 14:32:48 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:28.311 14:32:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:28.311 14:32:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2151535 00:05:28.311 14:32:48 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2151535 ']' 00:05:28.311 14:32:48 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2151535 00:05:28.311 14:32:48 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:28.311 14:32:48 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.311 14:32:48 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2151535 00:05:28.311 14:32:48 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:28.311 14:32:48 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:28.311 14:32:48 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2151535' 00:05:28.311 killing process with pid 2151535 00:05:28.311 14:32:48 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2151535 00:05:28.311 14:32:48 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2151535 00:05:28.571 [2024-07-25 14:32:48.671674] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
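The scheduler_create_thread sub-test above drives the scheduler app through an out-of-tree RPC plugin: rpc.py is invoked with --plugin scheduler_plugin to create pinned active and idle threads (core masks 0x1-0x8, activity 100 or 0), set one thread to 50 % activity, and delete another. Reduced to the bare calls, assuming the scheduler_plugin module from test/event/scheduler is importable by rpc.py (e.g. via PYTHONPATH):

RPC="./scripts/rpc.py --plugin scheduler_plugin"

# pinned, fully-active thread on core 0; the call returns the new thread id
tid=$($RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100)

$RPC scheduler_thread_set_active "$tid" 50   # drop it to 50 % activity
$RPC scheduler_thread_delete "$tid"          # and remove it again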
00:05:28.571 00:05:28.571 real 0m4.357s 00:05:28.571 user 0m8.276s 00:05:28.571 sys 0m0.359s 00:05:28.831 14:32:48 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.831 14:32:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.831 ************************************ 00:05:28.831 END TEST event_scheduler 00:05:28.831 ************************************ 00:05:28.831 14:32:48 event -- common/autotest_common.sh@1142 -- # return 0 00:05:28.831 14:32:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:28.831 14:32:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:28.831 14:32:48 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.831 14:32:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.831 14:32:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.831 ************************************ 00:05:28.831 START TEST app_repeat 00:05:28.831 ************************************ 00:05:28.831 14:32:48 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:28.831 14:32:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.831 14:32:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.831 14:32:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:28.831 14:32:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.831 14:32:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:28.831 14:32:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:28.831 14:32:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:28.831 14:32:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2152286 00:05:28.831 14:32:48 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:28.831 14:32:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.831 14:32:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2152286' 00:05:28.831 Process app_repeat pid: 2152286 00:05:28.831 14:32:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.831 14:32:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:28.831 spdk_app_start Round 0 00:05:28.831 14:32:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2152286 /var/tmp/spdk-nbd.sock 00:05:28.831 14:32:48 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2152286 ']' 00:05:28.831 14:32:48 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.831 14:32:48 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.831 14:32:48 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.831 14:32:48 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.831 14:32:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.831 [2024-07-25 14:32:48.970210] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:05:28.831 [2024-07-25 14:32:48.970271] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2152286 ] 00:05:28.831 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.831 [2024-07-25 14:32:49.026219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.832 [2024-07-25 14:32:49.105466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.832 [2024-07-25 14:32:49.105470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.770 14:32:49 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.770 14:32:49 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:29.770 14:32:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.770 Malloc0 00:05:29.770 14:32:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.030 Malloc1 00:05:30.030 14:32:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.030 14:32:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.289 /dev/nbd0 00:05:30.289 14:32:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.290 14:32:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.290 14:32:50 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.290 1+0 records in 00:05:30.290 1+0 records out 00:05:30.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174014 s, 23.5 MB/s 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.290 14:32:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.290 14:32:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.290 14:32:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.290 /dev/nbd1 00:05:30.290 14:32:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.290 14:32:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.290 14:32:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.549 1+0 records in 00:05:30.549 1+0 records out 00:05:30.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218023 s, 18.8 MB/s 00:05:30.549 14:32:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.549 14:32:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.549 14:32:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.549 14:32:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.549 14:32:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.549 14:32:50 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.549 { 00:05:30.549 "nbd_device": "/dev/nbd0", 00:05:30.549 "bdev_name": "Malloc0" 00:05:30.549 }, 00:05:30.549 { 00:05:30.549 "nbd_device": "/dev/nbd1", 00:05:30.549 "bdev_name": "Malloc1" 00:05:30.549 } 00:05:30.549 ]' 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.549 { 00:05:30.549 "nbd_device": "/dev/nbd0", 00:05:30.549 "bdev_name": "Malloc0" 00:05:30.549 }, 00:05:30.549 { 00:05:30.549 "nbd_device": "/dev/nbd1", 00:05:30.549 "bdev_name": "Malloc1" 00:05:30.549 } 00:05:30.549 ]' 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.549 /dev/nbd1' 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.549 /dev/nbd1' 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.549 256+0 records in 00:05:30.549 256+0 records out 00:05:30.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103586 s, 101 MB/s 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.549 14:32:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.809 256+0 records in 00:05:30.809 256+0 records out 00:05:30.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143731 s, 73.0 MB/s 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.809 256+0 records in 00:05:30.809 256+0 records out 00:05:30.809 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0146804 s, 71.4 MB/s 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.809 14:32:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.809 14:32:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.809 14:32:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.809 14:32:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.809 14:32:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.809 14:32:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.809 14:32:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.809 14:32:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.809 14:32:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.809 14:32:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.809 14:32:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.068 14:32:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.068 14:32:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.068 14:32:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.068 14:32:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.068 14:32:51 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.068 14:32:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.068 14:32:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.068 14:32:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.068 14:32:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.068 14:32:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.068 14:32:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.327 14:32:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.327 14:32:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.327 14:32:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.327 14:32:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.328 14:32:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.328 14:32:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.328 14:32:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.328 14:32:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.328 14:32:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.328 14:32:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.328 14:32:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.328 14:32:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.328 14:32:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.587 14:32:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.587 [2024-07-25 14:32:51.858007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.847 [2024-07-25 14:32:51.927842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.847 [2024-07-25 14:32:51.927844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.847 [2024-07-25 14:32:51.968477] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.847 [2024-07-25 14:32:51.968516] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.394 14:32:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.395 14:32:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:34.395 spdk_app_start Round 1 00:05:34.395 14:32:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2152286 /var/tmp/spdk-nbd.sock 00:05:34.395 14:32:54 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2152286 ']' 00:05:34.395 14:32:54 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.395 14:32:54 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.395 14:32:54 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
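Round 0 of app_repeat is complete at this point and Round 1 is starting against a freshly restarted instance on /var/tmp/spdk-nbd.sock. Each round follows the same shape; every command below appears in the trace, and only the rpc shorthand and the comments are assumptions:

  # sketch of one app_repeat round (rpc stands for scripts/rpc.py -s /var/tmp/spdk-nbd.sock)
  rpc bdev_malloc_create 64 4096            # -> Malloc0: 64 MiB malloc bdev with 4 KiB blocks
  rpc bdev_malloc_create 64 4096            # -> Malloc1
  rpc nbd_start_disk Malloc0 /dev/nbd0      # expose both bdevs as NBD block devices
  rpc nbd_start_disk Malloc1 /dev/nbd1
  # ... dd/cmp data verification on /dev/nbd0 and /dev/nbd1 (sketched further below) ...
  rpc nbd_stop_disk /dev/nbd0
  rpc nbd_stop_disk /dev/nbd1
  rpc spdk_kill_instance SIGTERM            # end this round, sleep 3, then start the next one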
00:05:34.395 14:32:54 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.395 14:32:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.658 14:32:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.658 14:32:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:34.658 14:32:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.992 Malloc0 00:05:34.992 14:32:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.992 Malloc1 00:05:34.992 14:32:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.992 14:32:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.252 /dev/nbd0 00:05:35.252 14:32:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.252 14:32:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.252 14:32:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:35.252 14:32:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:35.252 14:32:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:35.252 14:32:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:35.252 14:32:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:35.252 14:32:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:35.252 14:32:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:35.252 14:32:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:35.252 14:32:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:35.252 1+0 records in 00:05:35.252 1+0 records out 00:05:35.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225628 s, 18.2 MB/s 00:05:35.252 14:32:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.252 14:32:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:35.252 14:32:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.252 14:32:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:35.252 14:32:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:35.252 14:32:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.252 14:32:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.252 14:32:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.511 /dev/nbd1 00:05:35.511 14:32:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.511 14:32:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.511 14:32:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:35.511 14:32:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:35.511 14:32:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:35.511 14:32:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:35.511 14:32:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:35.511 14:32:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:35.511 14:32:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:35.511 14:32:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:35.511 14:32:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.511 1+0 records in 00:05:35.511 1+0 records out 00:05:35.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001991 s, 20.6 MB/s 00:05:35.511 14:32:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.511 14:32:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:35.511 14:32:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.511 14:32:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:35.511 14:32:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:35.511 14:32:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.511 14:32:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.512 14:32:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.512 14:32:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.512 14:32:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.512 14:32:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:35.512 { 00:05:35.512 "nbd_device": "/dev/nbd0", 00:05:35.512 "bdev_name": "Malloc0" 00:05:35.512 }, 00:05:35.512 { 00:05:35.512 "nbd_device": "/dev/nbd1", 00:05:35.512 "bdev_name": "Malloc1" 00:05:35.512 } 00:05:35.512 ]' 00:05:35.512 14:32:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.512 { 00:05:35.512 "nbd_device": "/dev/nbd0", 00:05:35.512 "bdev_name": "Malloc0" 00:05:35.512 }, 00:05:35.512 { 00:05:35.512 "nbd_device": "/dev/nbd1", 00:05:35.512 "bdev_name": "Malloc1" 00:05:35.512 } 00:05:35.512 ]' 00:05:35.512 14:32:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.772 /dev/nbd1' 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.772 /dev/nbd1' 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.772 256+0 records in 00:05:35.772 256+0 records out 00:05:35.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104444 s, 100 MB/s 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.772 256+0 records in 00:05:35.772 256+0 records out 00:05:35.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140318 s, 74.7 MB/s 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.772 256+0 records in 00:05:35.772 256+0 records out 00:05:35.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148119 s, 70.8 MB/s 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.772 14:32:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.032 14:32:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.292 14:32:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.292 14:32:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.292 14:32:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.292 14:32:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.292 14:32:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.292 14:32:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.292 14:32:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.292 14:32:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.292 14:32:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.292 14:32:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.292 14:32:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.292 14:32:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.292 14:32:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.552 14:32:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:36.812 [2024-07-25 14:32:56.854175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.812 [2024-07-25 14:32:56.922125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.812 [2024-07-25 14:32:56.922128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.812 [2024-07-25 14:32:56.963630] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.812 [2024-07-25 14:32:56.963669] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.104 14:32:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.104 14:32:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:40.104 spdk_app_start Round 2 00:05:40.104 14:32:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2152286 /var/tmp/spdk-nbd.sock 00:05:40.104 14:32:59 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2152286 ']' 00:05:40.104 14:32:59 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.104 14:32:59 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.104 14:32:59 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
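The data-verification step repeated in every round is nbd_dd_data_verify: the write pass fills a scratch file with 1 MiB of random data and copies it onto each NBD device with O_DIRECT, and the verify pass compares the first 1 MiB of each device back against that file. A sketch using the dd/cmp invocations from the trace (only the $testdir shorthand for the jenkins workspace path is an assumption):

  # write pass
  dd if=/dev/urandom of=$testdir/nbdrandtest bs=4096 count=256             # 256 x 4 KiB = 1 MiB of random data
  dd if=$testdir/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  dd if=$testdir/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
  # verify pass
  cmp -b -n 1M $testdir/nbdrandtest /dev/nbd0                              # any mismatch makes cmp exit non-zero and fails the test
  cmp -b -n 1M $testdir/nbdrandtest /dev/nbd1
  rm $testdir/nbdrandtest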
00:05:40.104 14:32:59 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.104 14:32:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.104 14:32:59 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.104 14:32:59 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:40.104 14:32:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.104 Malloc0 00:05:40.104 14:33:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.104 Malloc1 00:05:40.104 14:33:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.104 /dev/nbd0 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.104 14:33:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.104 14:33:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:40.104 14:33:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:40.104 14:33:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:40.104 14:33:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:40.104 14:33:00 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:40.365 1+0 records in 00:05:40.365 1+0 records out 00:05:40.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165661 s, 24.7 MB/s 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:40.365 14:33:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.365 14:33:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.365 14:33:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.365 /dev/nbd1 00:05:40.365 14:33:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.365 14:33:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.365 1+0 records in 00:05:40.365 1+0 records out 00:05:40.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021652 s, 18.9 MB/s 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:40.365 14:33:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:40.365 14:33:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.365 14:33:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.365 14:33:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.365 14:33:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.365 14:33:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:40.625 { 00:05:40.625 "nbd_device": "/dev/nbd0", 00:05:40.625 "bdev_name": "Malloc0" 00:05:40.625 }, 00:05:40.625 { 00:05:40.625 "nbd_device": "/dev/nbd1", 00:05:40.625 "bdev_name": "Malloc1" 00:05:40.625 } 00:05:40.625 ]' 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.625 { 00:05:40.625 "nbd_device": "/dev/nbd0", 00:05:40.625 "bdev_name": "Malloc0" 00:05:40.625 }, 00:05:40.625 { 00:05:40.625 "nbd_device": "/dev/nbd1", 00:05:40.625 "bdev_name": "Malloc1" 00:05:40.625 } 00:05:40.625 ]' 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.625 /dev/nbd1' 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.625 /dev/nbd1' 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.625 256+0 records in 00:05:40.625 256+0 records out 00:05:40.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103143 s, 102 MB/s 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.625 256+0 records in 00:05:40.625 256+0 records out 00:05:40.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135736 s, 77.3 MB/s 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.625 256+0 records in 00:05:40.625 256+0 records out 00:05:40.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144765 s, 72.4 MB/s 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.625 14:33:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.885 14:33:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.885 14:33:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.885 14:33:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.885 14:33:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.885 14:33:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.885 14:33:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.885 14:33:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.885 14:33:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.885 14:33:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.885 14:33:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.145 14:33:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.145 14:33:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.145 14:33:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.145 14:33:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.145 14:33:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.145 14:33:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.145 14:33:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.145 14:33:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.145 14:33:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.145 14:33:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.145 14:33:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.405 14:33:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.405 14:33:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.405 14:33:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.405 14:33:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.405 14:33:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.405 14:33:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.405 14:33:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.405 14:33:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.405 14:33:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.405 14:33:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.405 14:33:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.405 14:33:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.405 14:33:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.405 14:33:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.665 [2024-07-25 14:33:01.877653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.665 [2024-07-25 14:33:01.944934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.665 [2024-07-25 14:33:01.944937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.924 [2024-07-25 14:33:01.985578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.924 [2024-07-25 14:33:01.985618] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.459 14:33:04 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2152286 /var/tmp/spdk-nbd.sock 00:05:44.459 14:33:04 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2152286 ']' 00:05:44.459 14:33:04 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.459 14:33:04 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.460 14:33:04 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
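What follows is the teardown of the app_repeat test: killprocess is called on pid 2152286 and the completed rounds are summarised. The guard checks killprocess performs are visible in the trace; a rough reconstruction, with the sudo-owned branch left out because it is not exercised here:

  # reconstruction of killprocess from the autotest_common.sh lines in the trace
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                              # the '[' -z "$pid" ']' guard
      kill -0 "$pid" || return 0                             # nothing to do if the process is already gone
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
      if [ "$process_name" != sudo ]; then
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid"                                        # reap it so the exit status is observed
      fi
  }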
00:05:44.460 14:33:04 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.460 14:33:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.719 14:33:04 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.719 14:33:04 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:44.719 14:33:04 event.app_repeat -- event/event.sh@39 -- # killprocess 2152286 00:05:44.719 14:33:04 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2152286 ']' 00:05:44.719 14:33:04 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2152286 00:05:44.719 14:33:04 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:44.719 14:33:04 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.719 14:33:04 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2152286 00:05:44.719 14:33:04 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.719 14:33:04 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.719 14:33:04 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2152286' 00:05:44.719 killing process with pid 2152286 00:05:44.719 14:33:04 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2152286 00:05:44.719 14:33:04 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2152286 00:05:44.978 spdk_app_start is called in Round 0. 00:05:44.978 Shutdown signal received, stop current app iteration 00:05:44.978 Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 reinitialization... 00:05:44.978 spdk_app_start is called in Round 1. 00:05:44.978 Shutdown signal received, stop current app iteration 00:05:44.978 Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 reinitialization... 00:05:44.978 spdk_app_start is called in Round 2. 00:05:44.978 Shutdown signal received, stop current app iteration 00:05:44.978 Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 reinitialization... 00:05:44.978 spdk_app_start is called in Round 3. 
00:05:44.978 Shutdown signal received, stop current app iteration 00:05:44.978 14:33:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:44.978 14:33:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:44.978 00:05:44.978 real 0m16.148s 00:05:44.978 user 0m35.032s 00:05:44.978 sys 0m2.411s 00:05:44.978 14:33:05 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.978 14:33:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.978 ************************************ 00:05:44.978 END TEST app_repeat 00:05:44.978 ************************************ 00:05:44.978 14:33:05 event -- common/autotest_common.sh@1142 -- # return 0 00:05:44.978 14:33:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:44.978 14:33:05 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:44.978 14:33:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.978 14:33:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.978 14:33:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.978 ************************************ 00:05:44.978 START TEST cpu_locks 00:05:44.978 ************************************ 00:05:44.978 14:33:05 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:44.978 * Looking for test storage... 00:05:44.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:44.979 14:33:05 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:44.979 14:33:05 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:44.979 14:33:05 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:44.979 14:33:05 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:44.979 14:33:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.979 14:33:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.979 14:33:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.237 ************************************ 00:05:45.237 START TEST default_locks 00:05:45.237 ************************************ 00:05:45.237 14:33:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:45.237 14:33:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2155395 00:05:45.237 14:33:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2155395 00:05:45.237 14:33:05 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2155395 ']' 00:05:45.237 14:33:05 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.237 14:33:05 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.237 14:33:05 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.238 14:33:05 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.238 14:33:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.238 14:33:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.238 [2024-07-25 14:33:05.320975] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:05:45.238 [2024-07-25 14:33:05.321018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2155395 ] 00:05:45.238 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.238 [2024-07-25 14:33:05.375004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.238 [2024-07-25 14:33:05.454514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.170 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.171 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:46.171 14:33:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2155395 00:05:46.171 14:33:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2155395 00:05:46.171 14:33:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.171 lslocks: write error 00:05:46.171 14:33:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2155395 00:05:46.171 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2155395 ']' 00:05:46.171 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2155395 00:05:46.171 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:46.171 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.171 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2155395 00:05:46.429 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.429 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.429 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2155395' 00:05:46.429 killing process with pid 2155395 00:05:46.429 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2155395 00:05:46.429 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2155395 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2155395 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2155395 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:46.688 14:33:06 
event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 2155395 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2155395 ']' 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2155395) - No such process 00:05:46.688 ERROR: process (pid: 2155395) is no longer running 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.688 00:05:46.688 real 0m1.527s 00:05:46.688 user 0m1.594s 00:05:46.688 sys 0m0.508s 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.688 14:33:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.688 ************************************ 00:05:46.688 END TEST default_locks 00:05:46.688 ************************************ 00:05:46.688 14:33:06 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:46.688 14:33:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:46.688 14:33:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.688 14:33:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.688 14:33:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.688 ************************************ 00:05:46.688 START TEST default_locks_via_rpc 00:05:46.688 ************************************ 00:05:46.688 14:33:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:46.688 14:33:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2155656 00:05:46.688 14:33:06 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2155656 00:05:46.688 14:33:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2155656 ']' 00:05:46.688 14:33:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.688 14:33:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.688 14:33:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.688 14:33:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.688 14:33:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.688 14:33:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.688 [2024-07-25 14:33:06.913229] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:05:46.689 [2024-07-25 14:33:06.913269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2155656 ] 00:05:46.689 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.689 [2024-07-25 14:33:06.965813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.948 [2024-07-25 14:33:07.047910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2155656 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # lslocks -p 2155656 00:05:47.516 14:33:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.083 14:33:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2155656 00:05:48.083 14:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2155656 ']' 00:05:48.083 14:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2155656 00:05:48.083 14:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:48.083 14:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.083 14:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2155656 00:05:48.083 14:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.083 14:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.083 14:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2155656' 00:05:48.083 killing process with pid 2155656 00:05:48.083 14:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2155656 00:05:48.083 14:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2155656 00:05:48.342 00:05:48.342 real 0m1.568s 00:05:48.342 user 0m1.648s 00:05:48.342 sys 0m0.500s 00:05:48.342 14:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.342 14:33:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.342 ************************************ 00:05:48.342 END TEST default_locks_via_rpc 00:05:48.342 ************************************ 00:05:48.342 14:33:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:48.342 14:33:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:48.342 14:33:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.342 14:33:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.342 14:33:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.342 ************************************ 00:05:48.342 START TEST non_locking_app_on_locked_coremask 00:05:48.342 ************************************ 00:05:48.342 14:33:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:48.342 14:33:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.342 14:33:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2156084 00:05:48.342 14:33:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2156084 /var/tmp/spdk.sock 00:05:48.342 14:33:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2156084 ']' 00:05:48.342 14:33:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.342 14:33:08 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.342 14:33:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.342 14:33:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.342 14:33:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.342 [2024-07-25 14:33:08.526463] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:05:48.342 [2024-07-25 14:33:08.526500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2156084 ] 00:05:48.342 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.342 [2024-07-25 14:33:08.580267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.601 [2024-07-25 14:33:08.658927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.168 14:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.168 14:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:49.168 14:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2156448 00:05:49.168 14:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2156448 /var/tmp/spdk2.sock 00:05:49.168 14:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2156448 ']' 00:05:49.168 14:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.168 14:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.168 14:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.168 14:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.168 14:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:49.168 14:33:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.168 [2024-07-25 14:33:09.390749] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:05:49.168 [2024-07-25 14:33:09.390800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2156448 ] 00:05:49.168 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.427 [2024-07-25 14:33:09.466498] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:49.427 [2024-07-25 14:33:09.466524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.427 [2024-07-25 14:33:09.619723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.994 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.994 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:49.994 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2156084 00:05:49.994 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2156084 00:05:49.994 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.253 lslocks: write error 00:05:50.253 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2156084 00:05:50.253 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2156084 ']' 00:05:50.253 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2156084 00:05:50.253 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:50.253 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.253 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2156084 00:05:50.253 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.254 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.254 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2156084' 00:05:50.254 killing process with pid 2156084 00:05:50.254 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2156084 00:05:50.254 14:33:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2156084 00:05:51.190 14:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2156448 00:05:51.190 14:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2156448 ']' 00:05:51.190 14:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2156448 00:05:51.190 14:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:51.190 14:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.190 14:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- 
# ps --no-headers -o comm= 2156448 00:05:51.190 14:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.190 14:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.190 14:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2156448' 00:05:51.190 killing process with pid 2156448 00:05:51.190 14:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2156448 00:05:51.190 14:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2156448 00:05:51.190 00:05:51.190 real 0m2.991s 00:05:51.190 user 0m3.216s 00:05:51.190 sys 0m0.819s 00:05:51.190 14:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.190 14:33:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.190 ************************************ 00:05:51.190 END TEST non_locking_app_on_locked_coremask 00:05:51.190 ************************************ 00:05:51.448 14:33:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:51.448 14:33:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:51.448 14:33:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.449 14:33:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.449 14:33:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.449 ************************************ 00:05:51.449 START TEST locking_app_on_unlocked_coremask 00:05:51.449 ************************************ 00:05:51.449 14:33:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:51.449 14:33:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2156818 00:05:51.449 14:33:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2156818 /var/tmp/spdk.sock 00:05:51.449 14:33:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:51.449 14:33:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2156818 ']' 00:05:51.449 14:33:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.449 14:33:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.449 14:33:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
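For readers scanning the cpu_locks output above, this is a minimal sketch of the pattern the non_locking_app_on_locked_coremask test just completed appears to exercise: one spdk_tgt instance pins a core and takes the per-core lock file, a second instance started with --disable-cpumask-locks shares the same core without locking, and lslocks confirms which process holds an spdk_cpu_lock entry (the stray "lslocks: write error" lines appear to be lslocks losing its pipe once grep -q has matched). The locking_app_on_unlocked_coremask test starting below swaps which of the two instances disables locking. The binary path and the sleeps here are assumptions for illustration; the real scripts use waitforlisten and the Jenkins workspace paths.

#!/usr/bin/env bash
# Sketch only -- assumes a local SPDK build at ./build/bin/spdk_tgt and
# util-linux lslocks on PATH; socket names mirror the log above.

SPDK_BIN=./build/bin/spdk_tgt        # assumption, not the CI workspace path
SOCK1=/var/tmp/spdk.sock             # primary RPC socket (as in the log)
SOCK2=/var/tmp/spdk2.sock            # secondary RPC socket (as in the log)

# First target: core mask 0x1, claims /var/tmp/spdk_cpu_lock_000 for core 0.
"$SPDK_BIN" -m 0x1 -r "$SOCK1" &
pid1=$!
sleep 2                              # the tests use waitforlisten instead

# Second target on the same core, with per-core locking disabled, so it
# starts even though core 0 is already claimed by pid1.
"$SPDK_BIN" -m 0x1 -r "$SOCK2" --disable-cpumask-locks &
pid2=$!
sleep 2

# Only the first process should hold an spdk_cpu_lock file lock.
lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "pid $pid1 holds the core 0 lock"
lslocks -p "$pid2" | grep -q spdk_cpu_lock || echo "pid $pid2 holds no core lock"

kill "$pid1" "$pid2"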
00:05:51.449 14:33:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.449 14:33:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.449 [2024-07-25 14:33:11.599373] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:05:51.449 [2024-07-25 14:33:11.599420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2156818 ] 00:05:51.449 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.449 [2024-07-25 14:33:11.655016] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:51.449 [2024-07-25 14:33:11.655056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.449 [2024-07-25 14:33:11.727406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.385 14:33:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.385 14:33:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:52.385 14:33:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2157048 00:05:52.385 14:33:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2157048 /var/tmp/spdk2.sock 00:05:52.385 14:33:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.385 14:33:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2157048 ']' 00:05:52.385 14:33:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.385 14:33:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.385 14:33:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.385 14:33:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.385 14:33:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.385 [2024-07-25 14:33:12.447135] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:05:52.385 [2024-07-25 14:33:12.447179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2157048 ] 00:05:52.385 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.385 [2024-07-25 14:33:12.522083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.385 [2024-07-25 14:33:12.667564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2157048 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2157048 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.321 lslocks: write error 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2156818 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2156818 ']' 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2156818 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2156818 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2156818' 00:05:53.321 killing process with pid 2156818 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2156818 00:05:53.321 14:33:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2156818 00:05:53.888 14:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2157048 00:05:53.888 14:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2157048 ']' 00:05:53.888 14:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2157048 00:05:53.888 14:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:53.888 14:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.888 14:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2157048 00:05:54.161 14:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:54.161 14:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.161 14:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2157048' 00:05:54.161 killing process with pid 2157048 00:05:54.161 14:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2157048 00:05:54.161 14:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2157048 00:05:54.449 00:05:54.449 real 0m2.954s 00:05:54.449 user 0m3.160s 00:05:54.449 sys 0m0.813s 00:05:54.449 14:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.449 14:33:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.449 ************************************ 00:05:54.449 END TEST locking_app_on_unlocked_coremask 00:05:54.449 ************************************ 00:05:54.449 14:33:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:54.449 14:33:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:54.449 14:33:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.449 14:33:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.449 14:33:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.449 ************************************ 00:05:54.449 START TEST locking_app_on_locked_coremask 00:05:54.449 ************************************ 00:05:54.449 14:33:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:54.449 14:33:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2157450 00:05:54.449 14:33:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2157450 /var/tmp/spdk.sock 00:05:54.449 14:33:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2157450 ']' 00:05:54.449 14:33:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.449 14:33:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.449 14:33:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.449 14:33:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.449 14:33:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.449 14:33:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.449 [2024-07-25 14:33:14.623834] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:05:54.449 [2024-07-25 14:33:14.623878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2157450 ] 00:05:54.449 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.449 [2024-07-25 14:33:14.676538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.708 [2024-07-25 14:33:14.758296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2157551 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2157551 /var/tmp/spdk2.sock 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2157551 /var/tmp/spdk2.sock 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2157551 /var/tmp/spdk2.sock 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2157551 ']' 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.276 14:33:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.276 [2024-07-25 14:33:15.448806] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:05:55.276 [2024-07-25 14:33:15.448851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2157551 ] 00:05:55.276 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.276 [2024-07-25 14:33:15.526604] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2157450 has claimed it. 00:05:55.276 [2024-07-25 14:33:15.526640] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2157551) - No such process 00:05:55.845 ERROR: process (pid: 2157551) is no longer running 00:05:55.845 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.845 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:55.845 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:55.845 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.845 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.845 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.845 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2157450 00:05:55.845 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.845 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2157450 00:05:56.413 lslocks: write error 00:05:56.413 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2157450 00:05:56.413 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2157450 ']' 00:05:56.413 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2157450 00:05:56.413 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:56.413 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.413 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2157450 00:05:56.413 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.413 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.413 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2157450' 00:05:56.413 killing process with pid 2157450 00:05:56.413 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2157450 00:05:56.413 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2157450 00:05:56.673 00:05:56.673 real 0m2.268s 00:05:56.673 user 0m2.513s 00:05:56.673 sys 0m0.578s 00:05:56.673 14:33:16 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.673 14:33:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.673 ************************************ 00:05:56.673 END TEST locking_app_on_locked_coremask 00:05:56.673 ************************************ 00:05:56.673 14:33:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:56.673 14:33:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:56.673 14:33:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.673 14:33:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.673 14:33:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.673 ************************************ 00:05:56.673 START TEST locking_overlapped_coremask 00:05:56.673 ************************************ 00:05:56.673 14:33:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:56.673 14:33:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2157813 00:05:56.673 14:33:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2157813 /var/tmp/spdk.sock 00:05:56.673 14:33:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2157813 ']' 00:05:56.673 14:33:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.673 14:33:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.673 14:33:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.673 14:33:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:56.673 14:33:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.673 14:33:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.673 [2024-07-25 14:33:16.948839] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:05:56.673 [2024-07-25 14:33:16.948884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2157813 ] 00:05:56.932 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.932 [2024-07-25 14:33:17.001141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.932 [2024-07-25 14:33:17.081852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.932 [2024-07-25 14:33:17.081946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.932 [2024-07-25 14:33:17.081946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2158042 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2158042 /var/tmp/spdk2.sock 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2158042 /var/tmp/spdk2.sock 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2158042 /var/tmp/spdk2.sock 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2158042 ']' 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.500 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.501 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.501 14:33:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.760 [2024-07-25 14:33:17.797445] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:05:57.760 [2024-07-25 14:33:17.797491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2158042 ] 00:05:57.760 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.760 [2024-07-25 14:33:17.873260] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2157813 has claimed it. 00:05:57.760 [2024-07-25 14:33:17.873294] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:58.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2158042) - No such process 00:05:58.327 ERROR: process (pid: 2158042) is no longer running 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2157813 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2157813 ']' 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2157813 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2157813 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2157813' 00:05:58.327 killing process with pid 2157813 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2157813 00:05:58.327 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2157813 00:05:58.587 00:05:58.587 real 0m1.882s 00:05:58.587 user 0m5.313s 00:05:58.587 sys 0m0.390s 00:05:58.587 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.587 14:33:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.587 ************************************ 00:05:58.587 END TEST locking_overlapped_coremask 00:05:58.587 ************************************ 00:05:58.587 14:33:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:58.587 14:33:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:58.587 14:33:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.587 14:33:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.587 14:33:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.587 ************************************ 00:05:58.587 START TEST locking_overlapped_coremask_via_rpc 00:05:58.587 ************************************ 00:05:58.587 14:33:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:58.587 14:33:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2158298 00:05:58.587 14:33:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2158298 /var/tmp/spdk.sock 00:05:58.587 14:33:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:58.587 14:33:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2158298 ']' 00:05:58.587 14:33:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.587 14:33:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.587 14:33:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.587 14:33:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.587 14:33:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.847 [2024-07-25 14:33:18.901740] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:05:58.847 [2024-07-25 14:33:18.901783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2158298 ] 00:05:58.847 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.847 [2024-07-25 14:33:18.954983] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:58.847 [2024-07-25 14:33:18.955006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.847 [2024-07-25 14:33:19.027253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.847 [2024-07-25 14:33:19.027352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.847 [2024-07-25 14:33:19.027352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.415 14:33:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.415 14:33:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:59.415 14:33:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2158318 00:05:59.415 14:33:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2158318 /var/tmp/spdk2.sock 00:05:59.415 14:33:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:59.415 14:33:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2158318 ']' 00:05:59.415 14:33:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.415 14:33:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.415 14:33:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.415 14:33:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.415 14:33:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.673 [2024-07-25 14:33:19.740482] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:05:59.673 [2024-07-25 14:33:19.740533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2158318 ] 00:05:59.673 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.673 [2024-07-25 14:33:19.818412] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.673 [2024-07-25 14:33:19.818442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.932 [2024-07-25 14:33:19.969201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.932 [2024-07-25 14:33:19.969256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.932 [2024-07-25 14:33:19.969257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.499 [2024-07-25 14:33:20.569117] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2158298 has claimed it. 
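The "Cannot create lock on core 2" error above is the expected outcome: the first target was started with -m 0x7 and the second with -m 0x1c, and those masks overlap on exactly one core. A quick arithmetic check of that overlap:

    # 0x7  = 0b00111 -> cores 0,1,2  (first target, /var/tmp/spdk.sock)
    # 0x1c = 0b11100 -> cores 2,3,4  (second target, /var/tmp/spdk2.sock)
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 only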
00:06:00.499 request: 00:06:00.499 { 00:06:00.499 "method": "framework_enable_cpumask_locks", 00:06:00.499 "req_id": 1 00:06:00.499 } 00:06:00.499 Got JSON-RPC error response 00:06:00.499 response: 00:06:00.499 { 00:06:00.499 "code": -32603, 00:06:00.499 "message": "Failed to claim CPU core: 2" 00:06:00.499 } 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.499 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2158298 /var/tmp/spdk.sock 00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2158298 ']' 00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2158318 /var/tmp/spdk2.sock 00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2158318 ']' 00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
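The JSON-RPC exchange above is the point of the via_rpc variant: core locks are left disabled at startup and claimed later through the framework_enable_cpumask_locks method, which fails with -32603 while another process still holds one of the cores. A hedged sketch of issuing the same call by hand, assuming the SPDK tree's scripts/rpc.py and the second target's socket path from the trace:

    # Ask the second target (mask 0x1c) to claim its cores while the first
    # target still holds core 2; expect the -32603 "Failed to claim CPU core: 2"
    # error shown in the response above.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks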
00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.500 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.759 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.759 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.759 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:00.759 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:00.759 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:00.759 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:00.759 00:06:00.759 real 0m2.112s 00:06:00.759 user 0m0.868s 00:06:00.759 sys 0m0.166s 00:06:00.759 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.759 14:33:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.759 ************************************ 00:06:00.759 END TEST locking_overlapped_coremask_via_rpc 00:06:00.759 ************************************ 00:06:00.759 14:33:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:00.759 14:33:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:00.759 14:33:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2158298 ]] 00:06:00.759 14:33:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2158298 00:06:00.759 14:33:20 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2158298 ']' 00:06:00.759 14:33:20 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2158298 00:06:00.759 14:33:20 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:00.759 14:33:21 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.759 14:33:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2158298 00:06:00.759 14:33:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.759 14:33:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.759 14:33:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2158298' 00:06:00.759 killing process with pid 2158298 00:06:00.759 14:33:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2158298 00:06:00.759 14:33:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2158298 00:06:01.325 14:33:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2158318 ]] 00:06:01.325 14:33:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2158318 00:06:01.325 14:33:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2158318 ']' 00:06:01.325 14:33:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2158318 00:06:01.325 14:33:21 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:01.325 14:33:21 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.325 14:33:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2158318 00:06:01.325 14:33:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:01.325 14:33:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:01.325 14:33:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2158318' 00:06:01.325 killing process with pid 2158318 00:06:01.325 14:33:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2158318 00:06:01.325 14:33:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2158318 00:06:01.584 14:33:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.584 14:33:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:01.584 14:33:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2158298 ]] 00:06:01.584 14:33:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2158298 00:06:01.584 14:33:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2158298 ']' 00:06:01.584 14:33:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2158298 00:06:01.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2158298) - No such process 00:06:01.584 14:33:21 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2158298 is not found' 00:06:01.584 Process with pid 2158298 is not found 00:06:01.584 14:33:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2158318 ]] 00:06:01.584 14:33:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2158318 00:06:01.584 14:33:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2158318 ']' 00:06:01.584 14:33:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2158318 00:06:01.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2158318) - No such process 00:06:01.584 14:33:21 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2158318 is not found' 00:06:01.584 Process with pid 2158318 is not found 00:06:01.584 14:33:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.584 00:06:01.584 real 0m16.583s 00:06:01.584 user 0m28.940s 00:06:01.584 sys 0m4.666s 00:06:01.584 14:33:21 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.584 14:33:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.584 ************************************ 00:06:01.584 END TEST cpu_locks 00:06:01.584 ************************************ 00:06:01.584 14:33:21 event -- common/autotest_common.sh@1142 -- # return 0 00:06:01.584 00:06:01.584 real 0m41.270s 00:06:01.584 user 1m18.873s 00:06:01.584 sys 0m8.014s 00:06:01.584 14:33:21 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.584 14:33:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.584 ************************************ 00:06:01.584 END TEST event 00:06:01.584 ************************************ 00:06:01.584 14:33:21 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.584 14:33:21 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:01.584 14:33:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.584 14:33:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.584 
14:33:21 -- common/autotest_common.sh@10 -- # set +x 00:06:01.584 ************************************ 00:06:01.584 START TEST thread 00:06:01.584 ************************************ 00:06:01.584 14:33:21 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:01.843 * Looking for test storage... 00:06:01.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:01.843 14:33:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.843 14:33:21 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:01.843 14:33:21 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.843 14:33:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.843 ************************************ 00:06:01.843 START TEST thread_poller_perf 00:06:01.843 ************************************ 00:06:01.843 14:33:21 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.843 [2024-07-25 14:33:21.970899] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:01.843 [2024-07-25 14:33:21.970969] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2158870 ] 00:06:01.843 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.843 [2024-07-25 14:33:22.028735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.843 [2024-07-25 14:33:22.101849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.843 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:03.222 ====================================== 00:06:03.222 busy:2309442208 (cyc) 00:06:03.222 total_run_count: 408000 00:06:03.222 tsc_hz: 2300000000 (cyc) 00:06:03.222 ====================================== 00:06:03.222 poller_cost: 5660 (cyc), 2460 (nsec) 00:06:03.222 00:06:03.222 real 0m1.227s 00:06:03.222 user 0m1.147s 00:06:03.222 sys 0m0.075s 00:06:03.222 14:33:23 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.222 14:33:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.222 ************************************ 00:06:03.222 END TEST thread_poller_perf 00:06:03.222 ************************************ 00:06:03.222 14:33:23 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:03.222 14:33:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.222 14:33:23 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:03.222 14:33:23 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.222 14:33:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.222 ************************************ 00:06:03.222 START TEST thread_poller_perf 00:06:03.222 ************************************ 00:06:03.222 14:33:23 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.222 [2024-07-25 14:33:23.255503] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:03.222 [2024-07-25 14:33:23.255569] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159119 ] 00:06:03.222 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.222 [2024-07-25 14:33:23.312708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.222 [2024-07-25 14:33:23.384459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.222 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:04.602 ====================================== 00:06:04.602 busy:2301682764 (cyc) 00:06:04.602 total_run_count: 5312000 00:06:04.602 tsc_hz: 2300000000 (cyc) 00:06:04.602 ====================================== 00:06:04.602 poller_cost: 433 (cyc), 188 (nsec) 00:06:04.602 00:06:04.602 real 0m1.222s 00:06:04.602 user 0m1.143s 00:06:04.602 sys 0m0.074s 00:06:04.602 14:33:24 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.602 14:33:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.602 ************************************ 00:06:04.602 END TEST thread_poller_perf 00:06:04.602 ************************************ 00:06:04.602 14:33:24 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:04.602 14:33:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:04.602 00:06:04.602 real 0m2.652s 00:06:04.602 user 0m2.383s 00:06:04.602 sys 0m0.276s 00:06:04.602 14:33:24 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.602 14:33:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.602 ************************************ 00:06:04.602 END TEST thread 00:06:04.602 ************************************ 00:06:04.602 14:33:24 -- common/autotest_common.sh@1142 -- # return 0 00:06:04.602 14:33:24 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:04.602 14:33:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.602 14:33:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.602 14:33:24 -- common/autotest_common.sh@10 -- # set +x 00:06:04.602 ************************************ 00:06:04.602 START TEST accel 00:06:04.602 ************************************ 00:06:04.602 14:33:24 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:04.602 * Looking for test storage... 00:06:04.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:04.602 14:33:24 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:04.602 14:33:24 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:04.602 14:33:24 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:04.602 14:33:24 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2159411 00:06:04.602 14:33:24 accel -- accel/accel.sh@63 -- # waitforlisten 2159411 00:06:04.602 14:33:24 accel -- common/autotest_common.sh@829 -- # '[' -z 2159411 ']' 00:06:04.602 14:33:24 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.602 14:33:24 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:04.602 14:33:24 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:04.602 14:33:24 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.602 14:33:24 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
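The poller_cost figures in the two poller_perf result blocks above are simply busy cycles divided by total_run_count, converted to nanoseconds using the reported 2300000000 Hz TSC rate. Re-deriving them from the printed numbers:

    echo "1us-period run: $(( 2309442208 / 408000 )) cyc per poll"    # ~5660 cyc
    echo "0us-period run: $(( 2301682764 / 5312000 )) cyc per poll"   # ~433 cyc
    # nanoseconds = cycles / 2.3 at a 2.3 GHz TSC
    echo '5660 / 2.3' | bc    # ~2460 ns
    echo '433 / 2.3'  | bc    # ~188 ns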
00:06:04.602 14:33:24 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.602 14:33:24 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.602 14:33:24 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.602 14:33:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.602 14:33:24 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.602 14:33:24 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.602 14:33:24 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.602 14:33:24 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:04.602 14:33:24 accel -- accel/accel.sh@41 -- # jq -r . 00:06:04.602 [2024-07-25 14:33:24.689064] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:04.602 [2024-07-25 14:33:24.689115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159411 ] 00:06:04.602 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.602 [2024-07-25 14:33:24.742927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.602 [2024-07-25 14:33:24.816205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@862 -- # return 0 00:06:05.541 14:33:25 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:05.541 14:33:25 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:05.541 14:33:25 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:05.541 14:33:25 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:05.541 14:33:25 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:05.541 14:33:25 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:05.541 14:33:25 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 
14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:05.541 14:33:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:05.541 14:33:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:05.541 14:33:25 accel -- accel/accel.sh@75 -- # killprocess 2159411 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@948 -- # '[' -z 2159411 ']' 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@952 -- # kill -0 2159411 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@953 -- # uname 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2159411 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2159411' 00:06:05.541 killing process with pid 2159411 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@967 -- # kill 2159411 00:06:05.541 14:33:25 accel -- common/autotest_common.sh@972 -- # wait 2159411 00:06:05.801 14:33:25 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:05.801 14:33:25 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:05.801 14:33:25 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:05.801 14:33:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.801 14:33:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.801 14:33:25 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:05.801 14:33:25 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:05.801 14:33:25 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.801 14:33:25 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:05.801 14:33:25 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.801 14:33:25 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.801 14:33:25 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.801 14:33:25 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.801 14:33:25 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:05.801 14:33:25 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
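The loop traced above fills expected_opcs from the accel_get_opc_assignments RPC, using a jq filter to flatten the opcode-to-module map. The same data can be pulled outside the harness with the repo's rpc.py (a sketch, assuming a target listening on the default /var/tmp/spdk.sock):

    # List accel opcode -> module assignments the way accel.sh does.
    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # With no hardware accel modules configured, every opcode reports
    # "software", matching the expected_opcs["$opc"]=software lines above.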
00:06:05.801 14:33:25 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.801 14:33:25 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:05.801 14:33:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.801 14:33:25 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:05.801 14:33:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:05.801 14:33:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.801 14:33:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.801 ************************************ 00:06:05.801 START TEST accel_missing_filename 00:06:05.801 ************************************ 00:06:05.801 14:33:25 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:05.801 14:33:25 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:05.801 14:33:25 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:05.801 14:33:25 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:05.801 14:33:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.801 14:33:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:05.801 14:33:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.801 14:33:25 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:05.801 14:33:25 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:05.801 14:33:25 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:05.801 14:33:25 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.801 14:33:25 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.801 14:33:25 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.801 14:33:25 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.801 14:33:25 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.801 14:33:25 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:05.801 14:33:25 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:05.801 [2024-07-25 14:33:26.012253] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:05.801 [2024-07-25 14:33:26.012321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159677 ] 00:06:05.801 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.801 [2024-07-25 14:33:26.068219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.061 [2024-07-25 14:33:26.143171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.061 [2024-07-25 14:33:26.183890] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.061 [2024-07-25 14:33:26.243871] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:06.061 A filename is required. 
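accel_missing_filename is a negative test: accel_perf is launched with -w compress but no -l input file, and the harness only passes when that run fails with the "A filename is required." error above. A reduced sketch of the same check, using the build-tree path from the trace:

    # Compress without an input file must fail; treat success as a test error.
    if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
         -t 1 -w compress; then
        echo "unexpected success: compress ran without -l" >&2
        exit 1
    fi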
00:06:06.061 14:33:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:06.061 14:33:26 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.061 14:33:26 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:06.061 14:33:26 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:06.061 14:33:26 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:06.061 14:33:26 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.061 00:06:06.061 real 0m0.332s 00:06:06.061 user 0m0.258s 00:06:06.061 sys 0m0.113s 00:06:06.061 14:33:26 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.061 14:33:26 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:06.061 ************************************ 00:06:06.061 END TEST accel_missing_filename 00:06:06.061 ************************************ 00:06:06.061 14:33:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.061 14:33:26 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.061 14:33:26 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:06.061 14:33:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.061 14:33:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.321 ************************************ 00:06:06.321 START TEST accel_compress_verify 00:06:06.321 ************************************ 00:06:06.321 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.321 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:06.321 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.321 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:06.321 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.321 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:06.321 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.321 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.321 14:33:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.321 14:33:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:06.321 14:33:26 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.321 14:33:26 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.321 14:33:26 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.321 14:33:26 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.321 14:33:26 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.321 14:33:26 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:06.321 14:33:26 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:06.321 [2024-07-25 14:33:26.404615] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:06.321 [2024-07-25 14:33:26.404683] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159701 ] 00:06:06.321 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.321 [2024-07-25 14:33:26.462048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.321 [2024-07-25 14:33:26.533818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.321 [2024-07-25 14:33:26.574723] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.581 [2024-07-25 14:33:26.634786] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:06.581 00:06:06.581 Compression does not support the verify option, aborting. 00:06:06.581 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:06.581 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.581 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:06.581 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:06.581 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:06.581 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.581 00:06:06.581 real 0m0.330s 00:06:06.581 user 0m0.260s 00:06:06.581 sys 0m0.110s 00:06:06.581 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.581 14:33:26 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:06.581 ************************************ 00:06:06.581 END TEST accel_compress_verify 00:06:06.581 ************************************ 00:06:06.581 14:33:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.581 14:33:26 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:06.581 14:33:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:06.581 14:33:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.581 14:33:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.581 ************************************ 00:06:06.581 START TEST accel_wrong_workload 00:06:06.581 ************************************ 00:06:06.581 14:33:26 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:06.581 14:33:26 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:06.581 14:33:26 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:06.581 14:33:26 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:06.581 14:33:26 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.581 14:33:26 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:06.581 14:33:26 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.581 14:33:26 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:06.581 14:33:26 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:06.581 14:33:26 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:06.581 14:33:26 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.581 14:33:26 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.581 14:33:26 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.581 14:33:26 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.581 14:33:26 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.581 14:33:26 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:06.581 14:33:26 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:06.581 Unsupported workload type: foobar 00:06:06.581 [2024-07-25 14:33:26.796088] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:06.581 accel_perf options: 00:06:06.581 [-h help message] 00:06:06.581 [-q queue depth per core] 00:06:06.581 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:06.581 [-T number of threads per core 00:06:06.581 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:06.581 [-t time in seconds] 00:06:06.581 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:06.581 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:06.581 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:06.581 [-l for compress/decompress workloads, name of uncompressed input file 00:06:06.581 [-S for crc32c workload, use this seed value (default 0) 00:06:06.581 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:06.581 [-f for fill workload, use this BYTE value (default 255) 00:06:06.581 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:06.581 [-y verify result if this switch is on] 00:06:06.581 [-a tasks to allocate per core (default: same value as -q)] 00:06:06.581 Can be used to spread operations across a wider range of memory. 
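accel_wrong_workload passes -w foobar and relies on accel_perf rejecting it with the usage text above; the positive tests that follow use the same flags. A valid invocation built only from options in that listing (the crc32c test later in this log runs exactly this; path from the trace):

    # 1-second software crc32c run with seed 32 and result verification.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -S 32 -y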
00:06:06.581 14:33:26 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:06.581 14:33:26 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.581 14:33:26 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:06.581 14:33:26 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.581 00:06:06.581 real 0m0.033s 00:06:06.581 user 0m0.020s 00:06:06.581 sys 0m0.013s 00:06:06.581 14:33:26 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.581 14:33:26 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:06.581 ************************************ 00:06:06.581 END TEST accel_wrong_workload 00:06:06.581 ************************************ 00:06:06.581 Error: writing output failed: Broken pipe 00:06:06.581 14:33:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.581 14:33:26 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:06.581 14:33:26 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:06.581 14:33:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.581 14:33:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.581 ************************************ 00:06:06.581 START TEST accel_negative_buffers 00:06:06.581 ************************************ 00:06:06.581 14:33:26 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:06.581 14:33:26 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:06.581 14:33:26 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:06.581 14:33:26 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:06.581 14:33:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.581 14:33:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:06.581 14:33:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.581 14:33:26 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:06.581 14:33:26 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:06.581 14:33:26 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:06.581 14:33:26 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.581 14:33:26 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.581 14:33:26 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.581 14:33:26 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.581 14:33:26 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.581 14:33:26 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:06.582 14:33:26 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:06.841 -x option must be non-negative. 
00:06:06.841 [2024-07-25 14:33:26.881949] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:06.841 accel_perf options: 00:06:06.841 [-h help message] 00:06:06.841 [-q queue depth per core] 00:06:06.841 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:06.841 [-T number of threads per core 00:06:06.841 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:06.841 [-t time in seconds] 00:06:06.841 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:06.841 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:06.841 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:06.841 [-l for compress/decompress workloads, name of uncompressed input file 00:06:06.841 [-S for crc32c workload, use this seed value (default 0) 00:06:06.841 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:06.841 [-f for fill workload, use this BYTE value (default 255) 00:06:06.841 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:06.841 [-y verify result if this switch is on] 00:06:06.842 [-a tasks to allocate per core (default: same value as -q)] 00:06:06.842 Can be used to spread operations across a wider range of memory. 00:06:06.842 14:33:26 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:06.842 14:33:26 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.842 14:33:26 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:06.842 14:33:26 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.842 00:06:06.842 real 0m0.033s 00:06:06.842 user 0m0.022s 00:06:06.842 sys 0m0.011s 00:06:06.842 14:33:26 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.842 14:33:26 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:06.842 ************************************ 00:06:06.842 END TEST accel_negative_buffers 00:06:06.842 ************************************ 00:06:06.842 Error: writing output failed: Broken pipe 00:06:06.842 14:33:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.842 14:33:26 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:06.842 14:33:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:06.842 14:33:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.842 14:33:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.842 ************************************ 00:06:06.842 START TEST accel_crc32c 00:06:06.842 ************************************ 00:06:06.842 14:33:26 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:06.842 14:33:26 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:06.842 14:33:26 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:06.842 14:33:26 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:06.842 14:33:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.842 14:33:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.842 14:33:26 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:06.842 14:33:26 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:06.842 14:33:26 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.842 14:33:26 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.842 14:33:26 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.842 14:33:26 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.842 14:33:26 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.842 14:33:26 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:06.842 14:33:26 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:06.842 [2024-07-25 14:33:26.963702] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:06.842 [2024-07-25 14:33:26.963751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159889 ] 00:06:06.842 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.842 [2024-07-25 14:33:27.017331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.842 [2024-07-25 14:33:27.090406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.842 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.842 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.842 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.842 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.842 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.101 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.102 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.102 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.102 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:07.102 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.102 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.102 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.102 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.102 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.102 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.102 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.102 14:33:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.102 14:33:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.102 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.102 14:33:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:08.041 14:33:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.041 00:06:08.041 real 0m1.323s 00:06:08.041 user 0m1.227s 00:06:08.041 sys 0m0.111s 00:06:08.041 14:33:28 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.041 14:33:28 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:08.041 ************************************ 00:06:08.041 END TEST accel_crc32c 00:06:08.041 ************************************ 00:06:08.041 14:33:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.041 14:33:28 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:08.041 14:33:28 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:08.041 14:33:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.041 14:33:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.041 ************************************ 00:06:08.041 START TEST accel_crc32c_C2 00:06:08.041 ************************************ 00:06:08.041 14:33:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:08.041 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.041 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:08.041 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.041 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.041 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:08.041 14:33:28 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:08.041 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.041 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.301 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.301 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:08.302 [2024-07-25 14:33:28.353912] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:08.302 [2024-07-25 14:33:28.353960] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160138 ] 00:06:08.302 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.302 [2024-07-25 14:33:28.408765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.302 [2024-07-25 14:33:28.481690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:08.302 14:33:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.682 00:06:09.682 real 0m1.335s 00:06:09.682 user 0m1.236s 00:06:09.682 sys 0m0.113s 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.682 14:33:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:09.682 ************************************ 00:06:09.682 END TEST accel_crc32c_C2 00:06:09.682 ************************************ 00:06:09.682 14:33:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.682 14:33:29 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:09.682 14:33:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:09.682 14:33:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.682 14:33:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.682 ************************************ 00:06:09.682 START TEST accel_copy 00:06:09.682 ************************************ 00:06:09.682 14:33:29 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:09.682 14:33:29 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:09.682 14:33:29 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
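For reference, the two crc32c runs traced above can be reproduced by hand against the same build; a minimal sketch, assuming the workspace path shown in this log and omitting the -c /dev/fd/62 JSON config that accel.sh pipes in:

  # same binary and flags as the accel_crc32c / accel_crc32c_C2 invocations traced above
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
  $SPDK/build/examples/accel_perf -t 1 -w crc32c -y -C 2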
00:06:09.682 14:33:29 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:09.682 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.682 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.682 14:33:29 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:09.682 14:33:29 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:09.682 14:33:29 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.682 14:33:29 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.682 14:33:29 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.682 14:33:29 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.682 14:33:29 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.682 14:33:29 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:09.682 14:33:29 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:09.682 [2024-07-25 14:33:29.741227] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:09.682 [2024-07-25 14:33:29.741264] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160401 ] 00:06:09.682 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.682 [2024-07-25 14:33:29.794243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.683 [2024-07-25 14:33:29.867055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.683 14:33:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.063 
14:33:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.063 14:33:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.064 14:33:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.064 14:33:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:11.064 14:33:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.064 00:06:11.064 real 0m1.325s 00:06:11.064 user 0m1.232s 00:06:11.064 sys 0m0.106s 00:06:11.064 14:33:31 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.064 14:33:31 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:11.064 ************************************ 00:06:11.064 END TEST accel_copy 00:06:11.064 ************************************ 00:06:11.064 14:33:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.064 14:33:31 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:11.064 14:33:31 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:11.064 14:33:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.064 14:33:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.064 ************************************ 00:06:11.064 START TEST accel_fill 00:06:11.064 ************************************ 00:06:11.064 14:33:31 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:11.064 [2024-07-25 14:33:31.125460] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:11.064 [2024-07-25 14:33:31.125509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160644 ] 00:06:11.064 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.064 [2024-07-25 14:33:31.179710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.064 [2024-07-25 14:33:31.252390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
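The fill case started above takes a few extra flags compared with the crc32c and copy runs; a hedged standalone equivalent, again assuming this workspace layout and dropping the -c /dev/fd/62 config pipe that accel.sh supplies:

  # fill workload for 1 second with verification, using the same -f/-q/-a values accel.sh passes above
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y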
00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.064 14:33:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.442 14:33:32 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:12.442 14:33:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.442 00:06:12.442 real 0m1.327s 00:06:12.442 user 0m1.227s 00:06:12.442 sys 0m0.115s 00:06:12.443 14:33:32 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.443 14:33:32 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:12.443 ************************************ 00:06:12.443 END TEST accel_fill 00:06:12.443 ************************************ 00:06:12.443 14:33:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.443 14:33:32 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:12.443 14:33:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:12.443 14:33:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.443 14:33:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.443 ************************************ 00:06:12.443 START TEST accel_copy_crc32c 00:06:12.443 ************************************ 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:12.443 [2024-07-25 14:33:32.530073] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:12.443 [2024-07-25 14:33:32.530131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160897 ] 00:06:12.443 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.443 [2024-07-25 14:33:32.586373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.443 [2024-07-25 14:33:32.659496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.443 
14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.443 14:33:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.869 00:06:13.869 real 0m1.338s 00:06:13.869 user 0m1.237s 00:06:13.869 sys 0m0.116s 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.869 14:33:33 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:13.869 ************************************ 00:06:13.869 END TEST accel_copy_crc32c 00:06:13.869 ************************************ 00:06:13.869 14:33:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.869 14:33:33 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:13.869 14:33:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:13.869 14:33:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.869 14:33:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.869 ************************************ 00:06:13.869 START TEST accel_copy_crc32c_C2 00:06:13.869 ************************************ 00:06:13.869 14:33:33 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:13.869 14:33:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.869 14:33:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:13.869 14:33:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:13.869 14:33:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:13.869 14:33:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.869 14:33:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.869 14:33:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.869 14:33:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.869 14:33:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.869 14:33:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.869 14:33:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:13.869 14:33:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:13.869 [2024-07-25 14:33:33.932146] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:13.869 [2024-07-25 14:33:33.932213] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161162 ] 00:06:13.869 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.869 [2024-07-25 14:33:33.986737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.869 [2024-07-25 14:33:34.059325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
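The copy_crc32c runs above (plain and the -C 2 variant) follow the same pattern; a sketch of the equivalent direct invocations, under the same assumptions as the earlier examples:

  # copy_crc32c workload, with and without the -C 2 option exercised above
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w copy_crc32c -y
  $SPDK/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2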
00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.869 14:33:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
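Each run in this section logs "EAL: No free 2048 kB hugepages reported on node 1"; if that notice needs to be cross-checked on the test node, the standard sysfs counters can be read directly (a sketch assuming 2 MB hugepages and NUMA node 1, as reported here):

  # per-node 2048 kB hugepage totals and free count for node 1, plus the system-wide summary
  cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
  cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages
  grep -i huge /proc/meminfo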
00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.250 00:06:15.250 real 0m1.336s 00:06:15.250 user 0m1.238s 00:06:15.250 sys 0m0.114s 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.250 14:33:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:15.250 ************************************ 00:06:15.250 END TEST accel_copy_crc32c_C2 00:06:15.250 ************************************ 00:06:15.250 14:33:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.250 14:33:35 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:15.250 14:33:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:15.250 14:33:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.250 14:33:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.250 ************************************ 00:06:15.250 START TEST accel_dualcast 00:06:15.250 ************************************ 00:06:15.250 14:33:35 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:15.250 [2024-07-25 14:33:35.329104] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
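Every sub-test above finishes with a real/user/sys timing triple and an "END TEST" marker; when skimming a console log this long, those summaries can be pulled out with a one-liner (a sketch assuming the output has been saved to a file, here hypothetically named console.log):

  # list each completed accel sub-test alongside its wall-clock time
  grep -oE 'real[[:space:]]+[0-9]+m[0-9.]+s|END TEST [[:alnum:]_]+' console.log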
00:06:15.250 [2024-07-25 14:33:35.329154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161416 ] 00:06:15.250 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.250 [2024-07-25 14:33:35.382799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.250 [2024-07-25 14:33:35.455287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.250 14:33:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.627 14:33:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.627 14:33:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.627 14:33:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.627 14:33:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.627 14:33:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.627 14:33:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.627 14:33:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.627 14:33:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.627 14:33:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.627 14:33:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.627 14:33:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:16.628 14:33:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.628 00:06:16.628 real 0m1.334s 00:06:16.628 user 0m1.229s 00:06:16.628 sys 0m0.118s 00:06:16.628 14:33:36 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.628 14:33:36 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:16.628 ************************************ 00:06:16.628 END TEST accel_dualcast 00:06:16.628 ************************************ 00:06:16.628 14:33:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.628 14:33:36 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:16.628 14:33:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:16.628 14:33:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.628 14:33:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.628 ************************************ 00:06:16.628 START TEST accel_compare 00:06:16.628 ************************************ 00:06:16.628 14:33:36 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:16.628 [2024-07-25 14:33:36.696782] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:06:16.628 [2024-07-25 14:33:36.696819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161661 ] 00:06:16.628 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.628 [2024-07-25 14:33:36.750089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.628 [2024-07-25 14:33:36.825003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.628 14:33:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.004 14:33:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.004 14:33:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.004 14:33:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.004 14:33:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.004 14:33:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.004 14:33:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.004 14:33:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.004 14:33:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.004 14:33:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.004 14:33:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.004 14:33:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.004 14:33:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.004 14:33:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.004 14:33:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.004 14:33:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.004 14:33:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.004 
14:33:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.004 14:33:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:18.005 14:33:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.005 14:33:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:18.005 14:33:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.005 00:06:18.005 real 0m1.325s 00:06:18.005 user 0m1.231s 00:06:18.005 sys 0m0.107s 00:06:18.005 14:33:38 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.005 14:33:38 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:18.005 ************************************ 00:06:18.005 END TEST accel_compare 00:06:18.005 ************************************ 00:06:18.005 14:33:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.005 14:33:38 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:18.005 14:33:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:18.005 14:33:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.005 14:33:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.005 ************************************ 00:06:18.005 START TEST accel_xor 00:06:18.005 ************************************ 00:06:18.005 14:33:38 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:18.005 [2024-07-25 14:33:38.085820] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
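Every block of the form '************ START TEST ... / END TEST ...' with a real/user/sys triple in between comes from the harness wrapping the test command in bash's time builtin. A rough, illustrative reconstruction of that wrapper follows (the actual run_test helper in common/autotest_common.sh differs in detail, e.g. xtrace handling and error reporting):
# Illustrative sketch only, not SPDK's actual helper.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                       # emits the real/user/sys lines seen in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
run_test accel_xor accel_test -t 1 -w xor -y    # as invoked at accel/accel.sh@109 above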
00:06:18.005 [2024-07-25 14:33:38.085859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161935 ] 00:06:18.005 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.005 [2024-07-25 14:33:38.137986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.005 [2024-07-25 14:33:38.210359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.005 14:33:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.382 00:06:19.382 real 0m1.319s 00:06:19.382 user 0m1.226s 00:06:19.382 sys 0m0.107s 00:06:19.382 14:33:39 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.382 14:33:39 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:19.382 ************************************ 00:06:19.382 END TEST accel_xor 00:06:19.382 ************************************ 00:06:19.382 14:33:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.382 14:33:39 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:19.382 14:33:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:19.382 14:33:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.382 14:33:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.382 ************************************ 00:06:19.382 START TEST accel_xor 00:06:19.382 ************************************ 00:06:19.382 14:33:39 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:19.382 [2024-07-25 14:33:39.471244] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
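The second accel_xor case differs from the first only in the explicit -x 3: in the traced option stream, the value set right after val=xor changes from 2 (the default seen in the previous run) to 3, i.e. the number of XOR source buffers. A sweep over that parameter could be scripted the same way the harness does it; the sketch below is illustrative and the accel_xor_x* test names are made up here:
# Illustrative sweep over the xor source-buffer count, grounded only in the traced
# -x values (2 by default, 3 when passed explicitly); test names are hypothetical.
for srcs in 2 3 4; do
    run_test "accel_xor_x${srcs}" accel_test -t 1 -w xor -y -x "$srcs"
done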
00:06:19.382 [2024-07-25 14:33:39.471298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162183 ] 00:06:19.382 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.382 [2024-07-25 14:33:39.525910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.382 [2024-07-25 14:33:39.597715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:19.382 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.383 14:33:39 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.383 14:33:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:20.762 14:33:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.762 00:06:20.762 real 0m1.336s 00:06:20.762 user 0m1.234s 00:06:20.762 sys 0m0.114s 00:06:20.762 14:33:40 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.762 14:33:40 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:20.762 ************************************ 00:06:20.762 END TEST accel_xor 00:06:20.762 ************************************ 00:06:20.762 14:33:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.762 14:33:40 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:20.762 14:33:40 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:20.762 14:33:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.762 14:33:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.762 ************************************ 00:06:20.762 START TEST accel_dif_verify 00:06:20.762 ************************************ 00:06:20.762 14:33:40 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:20.762 14:33:40 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:20.762 14:33:40 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:20.762 14:33:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:40 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:20.762 14:33:40 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:20.762 14:33:40 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:20.762 14:33:40 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.762 14:33:40 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.762 14:33:40 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.762 14:33:40 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.762 14:33:40 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.762 14:33:40 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:20.762 14:33:40 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:20.762 [2024-07-25 14:33:40.859695] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
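Unlike the plain copy/xor cases, the dif_verify trace that follows sets two 4096-byte buffer sizes plus a '512 bytes' and an '8 bytes' value, which is consistent with the classic T10 DIF layout of 8 bytes of protection information per 512-byte block. The arithmetic below is an interpretation of those traced values, not taken from SPDK documentation:
# Interpretation only: how the sizes in the dif_verify trace relate under a T10 DIF layout.
buf=4096; block=512; pi=8
echo "$(( buf / block )) blocks per buffer, $(( buf / block * pi )) PI bytes per buffer"
# -> 8 blocks per buffer, 64 PI bytes per buffer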
00:06:20.762 [2024-07-25 14:33:40.859758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162454 ] 00:06:20.762 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.762 [2024-07-25 14:33:40.915309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.762 [2024-07-25 14:33:40.986545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:20.762 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.763 14:33:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:22.141 14:33:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.141 00:06:22.141 real 0m1.334s 00:06:22.141 user 0m1.241s 00:06:22.141 sys 0m0.110s 00:06:22.141 14:33:42 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.141 14:33:42 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:22.141 ************************************ 00:06:22.141 END TEST accel_dif_verify 00:06:22.141 ************************************ 00:06:22.141 14:33:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.141 14:33:42 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:22.141 14:33:42 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:22.141 14:33:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.141 14:33:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.141 ************************************ 00:06:22.141 START TEST accel_dif_generate 00:06:22.141 ************************************ 00:06:22.141 14:33:42 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:22.141 14:33:42 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:22.141 14:33:42 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:22.141 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.141 
14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.141 14:33:42 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:22.142 [2024-07-25 14:33:42.251453] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:22.142 [2024-07-25 14:33:42.251506] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162705 ] 00:06:22.142 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.142 [2024-07-25 14:33:42.305993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.142 [2024-07-25 14:33:42.377483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:22.142 14:33:42 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.142 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:22.400 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.401 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.401 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.401 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.401 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.401 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.401 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.401 14:33:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.401 14:33:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.401 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.401 14:33:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.333 14:33:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.333 14:33:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.333 14:33:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.333 14:33:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.333 14:33:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.333 14:33:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.333 14:33:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.333 14:33:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.333 14:33:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.333 14:33:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.333 14:33:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.333 14:33:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.333 14:33:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.333 14:33:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.334 14:33:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.334 14:33:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.334 14:33:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.334 14:33:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.334 14:33:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.334 14:33:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.334 14:33:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.334 14:33:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.334 14:33:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.334 14:33:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.334 14:33:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.334 14:33:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:23.334 14:33:43 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.334 00:06:23.334 real 0m1.332s 00:06:23.334 user 0m1.234s 00:06:23.334 sys 0m0.112s 00:06:23.334 14:33:43 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.334 14:33:43 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:23.334 ************************************ 00:06:23.334 END TEST accel_dif_generate 00:06:23.334 ************************************ 00:06:23.334 14:33:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.334 14:33:43 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:23.334 14:33:43 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:23.334 14:33:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.334 14:33:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.334 ************************************ 00:06:23.334 START TEST accel_dif_generate_copy 00:06:23.334 ************************************ 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:23.334 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:23.592 [2024-07-25 14:33:43.637672] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
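Note on the dif_generate pass above: accel_perf ran the workload for one second on the software accel module (accel_module=software) and the test finished in roughly 1.33 s of wall time. A minimal way to repeat just that invocation by hand, assuming the same SPDK build tree as this job and leaving out the generated JSON config that the harness feeds in on /dev/fd/62 (accel_perf should fall back to its defaults without -c), might look like:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # run the DIF-generate workload for 1 second on the software accel module;
  # -t and -w are the only flags taken from the log, -c is intentionally omitted here
  ./build/examples/accel_perf -t 1 -w dif_generate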
00:06:23.592 [2024-07-25 14:33:43.637726] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162956 ] 00:06:23.592 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.592 [2024-07-25 14:33:43.692658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.592 [2024-07-25 14:33:43.764228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.592 14:33:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.967 00:06:24.967 real 0m1.334s 00:06:24.967 user 0m1.228s 00:06:24.967 sys 0m0.118s 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.967 14:33:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:24.967 ************************************ 00:06:24.967 END TEST accel_dif_generate_copy 00:06:24.967 ************************************ 00:06:24.967 14:33:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.967 14:33:44 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:24.967 14:33:44 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:24.967 14:33:44 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:24.967 14:33:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.967 14:33:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.967 ************************************ 00:06:24.967 START TEST accel_comp 00:06:24.967 ************************************ 00:06:24.967 14:33:45 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:24.967 14:33:45 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:24.967 [2024-07-25 14:33:45.030467] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:24.967 [2024-07-25 14:33:45.030518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163215 ] 00:06:24.967 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.967 [2024-07-25 14:33:45.084413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.967 [2024-07-25 14:33:45.155582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.967 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.967 14:33:45 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.968 14:33:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:26.347 14:33:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.347 00:06:26.347 real 0m1.334s 00:06:26.347 user 0m1.236s 00:06:26.347 sys 0m0.112s 00:06:26.347 14:33:46 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.347 14:33:46 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:26.347 ************************************ 00:06:26.347 END TEST accel_comp 00:06:26.347 ************************************ 00:06:26.347 14:33:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.347 14:33:46 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:26.347 14:33:46 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:26.348 14:33:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.348 14:33:46 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:26.348 ************************************ 00:06:26.348 START TEST accel_decomp 00:06:26.348 ************************************ 00:06:26.348 14:33:46 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:26.348 [2024-07-25 14:33:46.426469] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
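Note on the compress/decompress pair: accel_comp compresses the bundled test file test/accel/bib, and accel_decomp (whose startup appears above) decompresses the same file with result verification enabled via -y. Both invocations can be reproduced outside the harness with only the flags that appear in this log, assuming the same workspace path and again omitting the JSON config the wrapper passes on /dev/fd/62:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # one-second software-module compress, then decompress with verification (-y)
  ./build/examples/accel_perf -t 1 -w compress   -l ./test/accel/bib
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y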
00:06:26.348 [2024-07-25 14:33:46.426520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163461 ] 00:06:26.348 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.348 [2024-07-25 14:33:46.479512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.348 [2024-07-25 14:33:46.550574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.348 14:33:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.724 14:33:47 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:27.724 14:33:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:27.725 14:33:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.725 14:33:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:27.725 14:33:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.725 00:06:27.725 real 0m1.332s 00:06:27.725 user 0m1.231s 00:06:27.725 sys 0m0.116s 00:06:27.725 14:33:47 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.725 14:33:47 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:27.725 ************************************ 00:06:27.725 END TEST accel_decomp 00:06:27.725 ************************************ 00:06:27.725 14:33:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.725 14:33:47 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:27.725 14:33:47 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:27.725 14:33:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.725 14:33:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.725 ************************************ 00:06:27.725 START TEST accel_decomp_full 00:06:27.725 ************************************ 00:06:27.725 14:33:47 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:27.725 14:33:47 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:27.725 [2024-07-25 14:33:47.815957] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:27.725 [2024-07-25 14:33:47.816015] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163718 ] 00:06:27.725 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.725 [2024-07-25 14:33:47.870893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.725 [2024-07-25 14:33:47.943232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:27.725 14:33:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.725 14:33:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:27.725 14:33:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.725 14:33:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.725 14:33:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.725 14:33:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.725 14:33:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.102 14:33:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.102 00:06:29.102 real 0m1.337s 00:06:29.102 user 0m1.247s 00:06:29.102 sys 0m0.105s 00:06:29.102 14:33:49 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.102 14:33:49 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:29.102 ************************************ 00:06:29.102 END TEST accel_decomp_full 00:06:29.102 ************************************ 00:06:29.102 14:33:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.102 14:33:49 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.102 14:33:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:29.102 14:33:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.102 14:33:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.102 ************************************ 00:06:29.102 START TEST accel_decomp_mcore 00:06:29.102 ************************************ 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:29.102 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:29.102 [2024-07-25 14:33:49.221639] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:06:29.102 [2024-07-25 14:33:49.221705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163974 ] 00:06:29.102 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.102 [2024-07-25 14:33:49.275992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.102 [2024-07-25 14:33:49.349579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.102 [2024-07-25 14:33:49.349679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.102 [2024-07-25 14:33:49.349772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.103 [2024-07-25 14:33:49.349774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.103 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.103 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.103 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.103 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:29.362 14:33:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.301 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.301 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.302 00:06:30.302 real 0m1.347s 00:06:30.302 user 0m4.574s 00:06:30.302 sys 0m0.113s 00:06:30.302 14:33:50 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.302 14:33:50 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:30.302 ************************************ 00:06:30.302 END TEST accel_decomp_mcore 00:06:30.302 ************************************ 00:06:30.302 14:33:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.302 14:33:50 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:30.302 14:33:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:30.302 14:33:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.302 14:33:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.562 ************************************ 00:06:30.562 START TEST accel_decomp_full_mcore 00:06:30.562 ************************************ 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:30.562 [2024-07-25 14:33:50.629184] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
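The exact accel_perf command the wrapper builds for this full-buffer multi-core run is visible in the trace above. A by-hand equivalent is sketched below; the workspace path is specific to this CI node, and the -c /dev/fd/62 argument is the JSON accel config the harness pipes in over an inherited file descriptor, so a standalone run would point -c at a JSON file or, for the default software module used here, likely omit it altogether.

  # 1-second software decompress of test/accel/bib on a 4-core mask (-m 0xf);
  # with -o 0 the config trace below reads back the full '111250 bytes' buffer
  ./build/examples/accel_perf -t 1 -w decompress \
      -l test/accel/bib -y -o 0 -m 0xf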
00:06:30.562 [2024-07-25 14:33:50.629232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2164228 ] 00:06:30.562 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.562 [2024-07-25 14:33:50.682653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.562 [2024-07-25 14:33:50.756860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.562 [2024-07-25 14:33:50.756959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.562 [2024-07-25 14:33:50.757060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.562 [2024-07-25 14:33:50.757066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.562 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.563 14:33:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.942 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.942 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.942 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.942 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.942 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.942 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.942 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.942 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.942 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.942 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.942 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.942 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.942 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.942 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.943 00:06:31.943 real 0m1.353s 00:06:31.943 user 0m4.598s 00:06:31.943 sys 0m0.123s 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.943 14:33:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:31.943 ************************************ 00:06:31.943 END TEST accel_decomp_full_mcore 00:06:31.943 ************************************ 00:06:31.943 14:33:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.943 14:33:51 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:31.943 14:33:51 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:31.943 14:33:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.943 14:33:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.943 ************************************ 00:06:31.943 START TEST accel_decomp_mthread 00:06:31.943 ************************************ 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:31.943 [2024-07-25 14:33:52.042524] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
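The accel_decomp_mthread run that starts here repeats the same pattern on a single core: the EAL line shows -c 0x1, the buffer read back is '4096 bytes' because -o 0 is not passed, and -T 2 matches the thread count of 2 echoed in the config trace (the meaning of -T is inferred from that echo, not from accel_perf documentation).

  # single-core, default 4096-byte buffer, two worker threads (assumed meaning of -T 2)
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2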
00:06:31.943 [2024-07-25 14:33:52.042572] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2164483 ] 00:06:31.943 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.943 [2024-07-25 14:33:52.096269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.943 [2024-07-25 14:33:52.167614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.943 14:33:52 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.943 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.944 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.944 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.944 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.944 14:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.323 14:33:53 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:33.323 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.323 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.324 00:06:33.324 real 0m1.339s 00:06:33.324 user 0m1.245s 00:06:33.324 sys 0m0.109s 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.324 14:33:53 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:33.324 ************************************ 00:06:33.324 END TEST accel_decomp_mthread 00:06:33.324 ************************************ 00:06:33.324 14:33:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.324 14:33:53 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:33.324 14:33:53 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:33.324 14:33:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.324 14:33:53 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:33.324 ************************************ 00:06:33.324 START TEST accel_decomp_full_mthread 00:06:33.324 ************************************ 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:33.324 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:33.324 [2024-07-25 14:33:53.446363] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:06:33.324 [2024-07-25 14:33:53.446432] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2164733 ] 00:06:33.324 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.324 [2024-07-25 14:33:53.501629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.324 [2024-07-25 14:33:53.573507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.636 14:33:53 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:33.636 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.637 14:33:53 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.637 14:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:34.598 14:33:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.598 00:06:34.598 real 0m1.363s 00:06:34.598 user 0m1.265s 00:06:34.598 sys 0m0.112s 00:06:34.599 14:33:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.599 14:33:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:34.599 ************************************ 00:06:34.599 END 
TEST accel_decomp_full_mthread 00:06:34.599 ************************************ 00:06:34.599 14:33:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.599 14:33:54 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:34.599 14:33:54 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:34.599 14:33:54 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:34.599 14:33:54 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:34.599 14:33:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.599 14:33:54 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.599 14:33:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.599 14:33:54 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.599 14:33:54 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.599 14:33:54 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.599 14:33:54 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.599 14:33:54 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:34.599 14:33:54 accel -- accel/accel.sh@41 -- # jq -r . 00:06:34.599 ************************************ 00:06:34.599 START TEST accel_dif_functional_tests 00:06:34.599 ************************************ 00:06:34.599 14:33:54 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:34.599 [2024-07-25 14:33:54.883277] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:34.599 [2024-07-25 14:33:54.883308] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2164981 ] 00:06:34.858 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.859 [2024-07-25 14:33:54.935465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.859 [2024-07-25 14:33:55.008686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.859 [2024-07-25 14:33:55.008781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.859 [2024-07-25 14:33:55.008783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.859 00:06:34.859 00:06:34.859 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.859 http://cunit.sourceforge.net/ 00:06:34.859 00:06:34.859 00:06:34.859 Suite: accel_dif 00:06:34.859 Test: verify: DIF generated, GUARD check ...passed 00:06:34.859 Test: verify: DIF generated, APPTAG check ...passed 00:06:34.859 Test: verify: DIF generated, REFTAG check ...passed 00:06:34.859 Test: verify: DIF not generated, GUARD check ...[2024-07-25 14:33:55.076805] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:34.859 passed 00:06:34.859 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 14:33:55.076851] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:34.859 passed 00:06:34.859 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 14:33:55.076886] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:34.859 passed 00:06:34.859 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:34.859 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-25 
14:33:55.076931] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:34.859 passed 00:06:34.859 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:34.859 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:34.859 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:34.859 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 14:33:55.077032] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:34.859 passed 00:06:34.859 Test: verify copy: DIF generated, GUARD check ...passed 00:06:34.859 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:34.859 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:34.859 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 14:33:55.077146] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:34.859 passed 00:06:34.859 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 14:33:55.077168] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:34.859 passed 00:06:34.859 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 14:33:55.077192] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:34.859 passed 00:06:34.859 Test: generate copy: DIF generated, GUARD check ...passed 00:06:34.859 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:34.859 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:34.859 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:34.859 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:34.859 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:34.859 Test: generate copy: iovecs-len validate ...[2024-07-25 14:33:55.077355] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:34.859 passed 00:06:34.859 Test: generate copy: buffer alignment validate ...passed 00:06:34.859 00:06:34.859 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.859 suites 1 1 n/a 0 0 00:06:34.859 tests 26 26 26 0 0 00:06:34.859 asserts 115 115 115 0 n/a 00:06:34.859 00:06:34.859 Elapsed time = 0.002 seconds 00:06:35.119 00:06:35.119 real 0m0.401s 00:06:35.119 user 0m0.610s 00:06:35.119 sys 0m0.143s 00:06:35.119 14:33:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.119 14:33:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:35.119 ************************************ 00:06:35.119 END TEST accel_dif_functional_tests 00:06:35.119 ************************************ 00:06:35.119 14:33:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.119 00:06:35.119 real 0m30.719s 00:06:35.119 user 0m34.706s 00:06:35.119 sys 0m4.052s 00:06:35.119 14:33:55 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.119 14:33:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.119 ************************************ 00:06:35.119 END TEST accel 00:06:35.119 ************************************ 00:06:35.119 14:33:55 -- common/autotest_common.sh@1142 -- # return 0 00:06:35.119 14:33:55 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:35.119 14:33:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.119 14:33:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.119 14:33:55 -- common/autotest_common.sh@10 -- # set +x 00:06:35.119 ************************************ 00:06:35.119 START TEST accel_rpc 00:06:35.119 ************************************ 00:06:35.119 14:33:55 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:35.379 * Looking for test storage... 00:06:35.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:35.379 14:33:55 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:35.379 14:33:55 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2165051 00:06:35.379 14:33:55 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2165051 00:06:35.379 14:33:55 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:35.379 14:33:55 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2165051 ']' 00:06:35.379 14:33:55 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.379 14:33:55 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.379 14:33:55 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.379 14:33:55 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.379 14:33:55 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.379 [2024-07-25 14:33:55.485200] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
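The accel_rpc suite started above drives a bare spdk_tgt over JSON-RPC before the framework initializes. Reconstructed from the rpc_cmd calls in the trace that follows (repo-root-relative paths, sketch only), the sequence it exercises is roughly:

  ./build/bin/spdk_tgt --wait-for-rpc &
  ./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted pre-init, then overridden
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # trace greps this for 'software'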
00:06:35.379 [2024-07-25 14:33:55.485257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2165051 ] 00:06:35.379 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.379 [2024-07-25 14:33:55.540394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.379 [2024-07-25 14:33:55.614795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.316 14:33:56 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.316 14:33:56 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:36.316 14:33:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:36.316 14:33:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:36.316 14:33:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:36.316 14:33:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:36.316 14:33:56 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:36.316 14:33:56 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.316 14:33:56 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.316 14:33:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.316 ************************************ 00:06:36.316 START TEST accel_assign_opcode 00:06:36.316 ************************************ 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:36.316 [2024-07-25 14:33:56.328896] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:36.316 [2024-07-25 14:33:56.336899] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:36.316 
14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:36.316 14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.317 software 00:06:36.317 00:06:36.317 real 0m0.233s 00:06:36.317 user 0m0.043s 00:06:36.317 sys 0m0.007s 00:06:36.317 14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.317 14:33:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:36.317 ************************************ 00:06:36.317 END TEST accel_assign_opcode 00:06:36.317 ************************************ 00:06:36.317 14:33:56 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:36.317 14:33:56 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2165051 00:06:36.317 14:33:56 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2165051 ']' 00:06:36.317 14:33:56 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2165051 00:06:36.317 14:33:56 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:36.317 14:33:56 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.317 14:33:56 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2165051 00:06:36.576 14:33:56 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.576 14:33:56 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.576 14:33:56 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2165051' 00:06:36.576 killing process with pid 2165051 00:06:36.576 14:33:56 accel_rpc -- common/autotest_common.sh@967 -- # kill 2165051 00:06:36.576 14:33:56 accel_rpc -- common/autotest_common.sh@972 -- # wait 2165051 00:06:36.836 00:06:36.836 real 0m1.597s 00:06:36.836 user 0m1.672s 00:06:36.836 sys 0m0.425s 00:06:36.836 14:33:56 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.836 14:33:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.836 ************************************ 00:06:36.836 END TEST accel_rpc 00:06:36.836 ************************************ 00:06:36.836 14:33:56 -- common/autotest_common.sh@1142 -- # return 0 00:06:36.836 14:33:56 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:36.836 14:33:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.836 14:33:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.836 14:33:56 -- common/autotest_common.sh@10 -- # set +x 00:06:36.836 ************************************ 00:06:36.836 START TEST app_cmdline 00:06:36.836 ************************************ 00:06:36.836 14:33:57 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:36.836 * Looking for test storage... 
00:06:36.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:36.836 14:33:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:36.836 14:33:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2165365 00:06:36.836 14:33:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2165365 00:06:36.836 14:33:57 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:36.836 14:33:57 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2165365 ']' 00:06:36.836 14:33:57 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.836 14:33:57 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.836 14:33:57 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.836 14:33:57 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.836 14:33:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.096 [2024-07-25 14:33:57.143401] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:06:37.096 [2024-07-25 14:33:57.143451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2165365 ] 00:06:37.096 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.096 [2024-07-25 14:33:57.199831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.096 [2024-07-25 14:33:57.272728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.664 14:33:57 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.664 14:33:57 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:37.664 14:33:57 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:37.924 { 00:06:37.924 "version": "SPDK v24.09-pre git sha1 e7b600835", 00:06:37.924 "fields": { 00:06:37.924 "major": 24, 00:06:37.924 "minor": 9, 00:06:37.924 "patch": 0, 00:06:37.924 "suffix": "-pre", 00:06:37.924 "commit": "e7b600835" 00:06:37.924 } 00:06:37.924 } 00:06:37.924 14:33:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:37.924 14:33:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:37.924 14:33:58 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:37.924 14:33:58 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:37.924 14:33:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:37.924 14:33:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:37.924 14:33:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:37.924 14:33:58 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.924 14:33:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.924 14:33:58 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.924 14:33:58 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:37.924 14:33:58 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:37.924 14:33:58 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.924 14:33:58 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:37.924 14:33:58 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.924 14:33:58 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.924 14:33:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.924 14:33:58 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.924 14:33:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.924 14:33:58 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.924 14:33:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.924 14:33:58 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.924 14:33:58 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:37.924 14:33:58 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:38.184 request: 00:06:38.184 { 00:06:38.184 "method": "env_dpdk_get_mem_stats", 00:06:38.184 "req_id": 1 00:06:38.184 } 00:06:38.184 Got JSON-RPC error response 00:06:38.184 response: 00:06:38.184 { 00:06:38.184 "code": -32601, 00:06:38.184 "message": "Method not found" 00:06:38.184 } 00:06:38.184 14:33:58 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:38.184 14:33:58 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.184 14:33:58 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:38.184 14:33:58 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.184 14:33:58 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2165365 00:06:38.184 14:33:58 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2165365 ']' 00:06:38.184 14:33:58 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2165365 00:06:38.184 14:33:58 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:38.184 14:33:58 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.184 14:33:58 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2165365 00:06:38.184 14:33:58 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.184 14:33:58 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.184 14:33:58 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2165365' 00:06:38.184 killing process with pid 2165365 00:06:38.184 14:33:58 app_cmdline -- common/autotest_common.sh@967 -- # kill 2165365 00:06:38.184 14:33:58 app_cmdline -- common/autotest_common.sh@972 -- # wait 2165365 00:06:38.445 00:06:38.445 real 0m1.681s 00:06:38.445 user 0m2.002s 00:06:38.445 sys 0m0.432s 00:06:38.445 14:33:58 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
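As the app_cmdline run above shows, spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and anything else (here env_dpdk_get_mem_stats) is rejected with JSON-RPC error -32601 "Method not found". A small sketch of reproducing that by hand with the same binaries and paths used in this job:

# target that only serves the two whitelisted RPCs
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
    --rpcs-allowed spdk_get_version,rpc_get_methods &

# allowed: returns the version object and the method list shown above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods

# not on the allow list: expected to fail with code -32601, as in the response above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats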
00:06:38.445 14:33:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.445 ************************************ 00:06:38.445 END TEST app_cmdline 00:06:38.445 ************************************ 00:06:38.445 14:33:58 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.445 14:33:58 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:38.445 14:33:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.445 14:33:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.445 14:33:58 -- common/autotest_common.sh@10 -- # set +x 00:06:38.705 ************************************ 00:06:38.705 START TEST version 00:06:38.705 ************************************ 00:06:38.705 14:33:58 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:38.705 * Looking for test storage... 00:06:38.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:38.705 14:33:58 version -- app/version.sh@17 -- # get_header_version major 00:06:38.705 14:33:58 version -- app/version.sh@14 -- # cut -f2 00:06:38.705 14:33:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:38.705 14:33:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.705 14:33:58 version -- app/version.sh@17 -- # major=24 00:06:38.705 14:33:58 version -- app/version.sh@18 -- # get_header_version minor 00:06:38.705 14:33:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:38.705 14:33:58 version -- app/version.sh@14 -- # cut -f2 00:06:38.705 14:33:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.705 14:33:58 version -- app/version.sh@18 -- # minor=9 00:06:38.705 14:33:58 version -- app/version.sh@19 -- # get_header_version patch 00:06:38.705 14:33:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:38.705 14:33:58 version -- app/version.sh@14 -- # cut -f2 00:06:38.705 14:33:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.705 14:33:58 version -- app/version.sh@19 -- # patch=0 00:06:38.705 14:33:58 version -- app/version.sh@20 -- # get_header_version suffix 00:06:38.705 14:33:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:38.705 14:33:58 version -- app/version.sh@14 -- # cut -f2 00:06:38.705 14:33:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.705 14:33:58 version -- app/version.sh@20 -- # suffix=-pre 00:06:38.705 14:33:58 version -- app/version.sh@22 -- # version=24.9 00:06:38.705 14:33:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:38.705 14:33:58 version -- app/version.sh@28 -- # version=24.9rc0 00:06:38.705 14:33:58 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:38.705 14:33:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:38.705 14:33:58 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:38.705 14:33:58 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:38.705 00:06:38.705 real 0m0.160s 00:06:38.705 user 0m0.080s 00:06:38.705 sys 0m0.112s 00:06:38.705 14:33:58 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.705 14:33:58 version -- common/autotest_common.sh@10 -- # set +x 00:06:38.705 ************************************ 00:06:38.705 END TEST version 00:06:38.705 ************************************ 00:06:38.705 14:33:58 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.705 14:33:58 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:38.705 14:33:58 -- spdk/autotest.sh@198 -- # uname -s 00:06:38.705 14:33:58 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:38.705 14:33:58 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:38.705 14:33:58 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:38.705 14:33:58 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:38.705 14:33:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:38.705 14:33:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:38.705 14:33:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:38.705 14:33:58 -- common/autotest_common.sh@10 -- # set +x 00:06:38.705 14:33:58 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:38.705 14:33:58 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:38.705 14:33:58 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:38.705 14:33:58 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:38.705 14:33:58 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:38.705 14:33:58 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:38.705 14:33:58 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:38.705 14:33:58 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:38.705 14:33:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.705 14:33:58 -- common/autotest_common.sh@10 -- # set +x 00:06:38.965 ************************************ 00:06:38.965 START TEST nvmf_tcp 00:06:38.965 ************************************ 00:06:38.965 14:33:59 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:38.965 * Looking for test storage... 00:06:38.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.965 14:33:59 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.966 14:33:59 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.966 14:33:59 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.966 14:33:59 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.966 14:33:59 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.966 14:33:59 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.966 14:33:59 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.966 14:33:59 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:38.966 14:33:59 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.966 14:33:59 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:38.966 14:33:59 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:38.966 14:33:59 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:38.966 14:33:59 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.966 14:33:59 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.966 14:33:59 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.966 14:33:59 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:38.966 14:33:59 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:38.966 14:33:59 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:38.966 14:33:59 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:38.966 14:33:59 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:38.966 14:33:59 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:38.966 14:33:59 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:38.966 14:33:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.966 14:33:59 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:38.966 14:33:59 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:38.966 14:33:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:38.966 14:33:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.966 14:33:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.966 ************************************ 00:06:38.966 START TEST nvmf_example 00:06:38.966 ************************************ 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:38.966 * Looking for test storage... 
00:06:38.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:38.966 14:33:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.545 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:45.546 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:45.546 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:45.546 Found net devices under 
0000:86:00.0: cvl_0_0 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:45.546 Found net devices under 0000:86:00.1: cvl_0_1 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:45.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:06:45.546 00:06:45.546 --- 10.0.0.2 ping statistics --- 00:06:45.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.546 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:45.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:06:45.546 00:06:45.546 --- 10.0.0.1 ping statistics --- 00:06:45.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.546 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2168973 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2168973 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2168973 ']' 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
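Before the example target above was launched, nvmf_tcp_init wired the two e810 ports (cvl_0_0 / cvl_0_1) into a point-to-point test network: the target side moves into its own namespace at 10.0.0.2, the initiator side stays in the host at 10.0.0.1, and TCP port 4420 is opened between them. Condensed to the bare commands already visible in the log (the initial address flushes are omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address in the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let the NVMe/TCP listener through
ping -c 1 10.0.0.2                                               # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # namespace -> host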
00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.546 14:34:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.546 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.546 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.546 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:45.546 14:34:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:45.546 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:45.546 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.546 14:34:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:45.546 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.546 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:45.547 14:34:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:45.806 EAL: No free 2048 kB hugepages reported on node 1 
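The rpc_cmd calls above build the example subsystem before the perf run; stripped of the test-harness wrappers, the sequence is roughly the following, using the rpc.py path that appears elsewhere in this job (in the actual run the RPC socket belongs to the nvmf example app started inside cvl_0_0_ns_spdk):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192                     # TCP transport with the options shown above
$RPC bdev_malloc_create 64 512                                    # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# load generator that produces the latency table that follows
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'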
00:06:55.797 Initializing NVMe Controllers 00:06:55.797 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:55.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:55.797 Initialization complete. Launching workers. 00:06:55.797 ======================================================== 00:06:55.797 Latency(us) 00:06:55.797 Device Information : IOPS MiB/s Average min max 00:06:55.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12945.69 50.57 4943.55 719.86 16389.89 00:06:55.797 ======================================================== 00:06:55.797 Total : 12945.69 50.57 4943.55 719.86 16389.89 00:06:55.797 00:06:55.797 14:34:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:55.797 14:34:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:55.797 14:34:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:55.797 14:34:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:55.797 14:34:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:55.797 14:34:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:55.797 14:34:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:55.797 14:34:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:55.797 rmmod nvme_tcp 00:06:55.797 rmmod nvme_fabrics 00:06:55.797 rmmod nvme_keyring 00:06:55.797 14:34:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:55.797 14:34:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:55.797 14:34:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:55.797 14:34:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2168973 ']' 00:06:55.797 14:34:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2168973 00:06:55.797 14:34:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2168973 ']' 00:06:55.797 14:34:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2168973 00:06:55.797 14:34:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:55.797 14:34:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.797 14:34:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2168973 00:06:56.057 14:34:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:56.057 14:34:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:56.057 14:34:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2168973' 00:06:56.057 killing process with pid 2168973 00:06:56.057 14:34:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2168973 00:06:56.057 14:34:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2168973 00:06:56.057 nvmf threads initialize successfully 00:06:56.057 bdev subsystem init successfully 00:06:56.057 created a nvmf target service 00:06:56.057 create targets's poll groups done 00:06:56.057 all subsystems of target started 00:06:56.057 nvmf target is running 00:06:56.057 all subsystems of target stopped 00:06:56.057 destroy targets's poll groups done 00:06:56.057 destroyed the nvmf target service 00:06:56.057 bdev subsystem finish successfully 00:06:56.057 nvmf threads destroy successfully 00:06:56.057 14:34:16 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:56.057 14:34:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:56.057 14:34:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:56.057 14:34:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:56.057 14:34:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:56.057 14:34:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.057 14:34:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:56.057 14:34:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.601 14:34:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:58.601 14:34:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:58.601 14:34:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:58.601 14:34:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.601 00:06:58.601 real 0m19.230s 00:06:58.601 user 0m45.814s 00:06:58.601 sys 0m5.446s 00:06:58.601 14:34:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.601 14:34:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.601 ************************************ 00:06:58.601 END TEST nvmf_example 00:06:58.601 ************************************ 00:06:58.601 14:34:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:58.601 14:34:18 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:58.601 14:34:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:58.601 14:34:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.601 14:34:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.601 ************************************ 00:06:58.601 START TEST nvmf_filesystem 00:06:58.601 ************************************ 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:58.601 * Looking for test storage... 
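For reference, the nvmf_example teardown that ran just before nvmf_filesystem started boils down to unloading the kernel NVMe/TCP modules, stopping the example target, and dismantling the test namespace. A condensed sketch; the namespace removal is performed by the harness helper _remove_spdk_ns, so the ip netns delete line is only an assumed equivalent:

modprobe -v -r nvme-tcp                 # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, per the lines above
modprobe -v -r nvme-fabrics
kill 2168973 && wait 2168973            # the example target started earlier in this run
ip netns delete cvl_0_0_ns_spdk         # assumption: what _remove_spdk_ns effectively does here
ip -4 addr flush cvl_0_1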
00:06:58.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:58.601 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:58.602 14:34:18 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:58.602 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:58.602 #define SPDK_CONFIG_H 00:06:58.602 #define SPDK_CONFIG_APPS 1 00:06:58.602 #define SPDK_CONFIG_ARCH native 00:06:58.602 #undef SPDK_CONFIG_ASAN 00:06:58.602 #undef SPDK_CONFIG_AVAHI 00:06:58.602 #undef SPDK_CONFIG_CET 00:06:58.602 #define SPDK_CONFIG_COVERAGE 1 00:06:58.602 #define SPDK_CONFIG_CROSS_PREFIX 00:06:58.602 #undef SPDK_CONFIG_CRYPTO 00:06:58.602 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:58.602 #undef SPDK_CONFIG_CUSTOMOCF 00:06:58.602 #undef SPDK_CONFIG_DAOS 00:06:58.602 #define SPDK_CONFIG_DAOS_DIR 00:06:58.602 #define SPDK_CONFIG_DEBUG 1 00:06:58.602 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:58.602 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:58.602 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:58.602 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:58.602 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:58.602 #undef SPDK_CONFIG_DPDK_UADK 00:06:58.602 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:58.602 #define SPDK_CONFIG_EXAMPLES 1 00:06:58.602 #undef SPDK_CONFIG_FC 00:06:58.602 #define SPDK_CONFIG_FC_PATH 00:06:58.602 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:58.602 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:58.602 #undef SPDK_CONFIG_FUSE 00:06:58.602 #undef SPDK_CONFIG_FUZZER 00:06:58.602 #define SPDK_CONFIG_FUZZER_LIB 00:06:58.602 #undef SPDK_CONFIG_GOLANG 00:06:58.602 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:58.602 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:58.602 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:58.602 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:58.602 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:58.602 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:58.602 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:58.602 #define SPDK_CONFIG_IDXD 1 00:06:58.602 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:58.603 #undef SPDK_CONFIG_IPSEC_MB 00:06:58.603 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:58.603 #define SPDK_CONFIG_ISAL 1 00:06:58.603 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:58.603 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:58.603 #define SPDK_CONFIG_LIBDIR 00:06:58.603 #undef SPDK_CONFIG_LTO 00:06:58.603 #define SPDK_CONFIG_MAX_LCORES 128 00:06:58.603 #define SPDK_CONFIG_NVME_CUSE 1 00:06:58.603 #undef SPDK_CONFIG_OCF 00:06:58.603 #define SPDK_CONFIG_OCF_PATH 00:06:58.603 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:58.603 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:58.603 #define SPDK_CONFIG_PGO_DIR 00:06:58.603 #undef SPDK_CONFIG_PGO_USE 00:06:58.603 #define SPDK_CONFIG_PREFIX /usr/local 00:06:58.603 #undef SPDK_CONFIG_RAID5F 00:06:58.603 #undef SPDK_CONFIG_RBD 00:06:58.603 #define SPDK_CONFIG_RDMA 1 00:06:58.603 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:58.603 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:58.603 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:58.603 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:58.603 #define SPDK_CONFIG_SHARED 1 00:06:58.603 #undef SPDK_CONFIG_SMA 00:06:58.603 #define SPDK_CONFIG_TESTS 1 00:06:58.603 #undef SPDK_CONFIG_TSAN 00:06:58.603 #define SPDK_CONFIG_UBLK 1 00:06:58.603 #define SPDK_CONFIG_UBSAN 1 00:06:58.603 #undef SPDK_CONFIG_UNIT_TESTS 00:06:58.603 #undef SPDK_CONFIG_URING 00:06:58.603 #define SPDK_CONFIG_URING_PATH 00:06:58.603 #undef SPDK_CONFIG_URING_ZNS 00:06:58.603 #undef SPDK_CONFIG_USDT 00:06:58.603 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:58.603 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:58.603 #define SPDK_CONFIG_VFIO_USER 1 00:06:58.603 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:58.603 #define SPDK_CONFIG_VHOST 1 00:06:58.603 #define SPDK_CONFIG_VIRTIO 1 00:06:58.603 #undef SPDK_CONFIG_VTUNE 00:06:58.603 #define SPDK_CONFIG_VTUNE_DIR 00:06:58.603 #define SPDK_CONFIG_WERROR 1 00:06:58.603 #define SPDK_CONFIG_WPDK_DIR 00:06:58.603 #undef SPDK_CONFIG_XNVME 00:06:58.603 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:58.603 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:58.604 14:34:18 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:58.604 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
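(Aside on the sanitizer setup traced above: autotest_common.sh wipes /var/tmp/asan_suppression_file, seeds it with the single leak:libfuse3.so suppression, and exports the ASAN/UBSAN/LSAN runtime options for everything the test run starts. A minimal standalone sketch of that step, with the option strings and file path copied from the trace; the script framing itself is illustrative, not the harness's own code.)

#!/usr/bin/env bash
# Recreate the sanitizer environment shown in the trace (values copied from the log).
set -euo pipefail

asan_suppression_file=/var/tmp/asan_suppression_file

# Start from a clean suppression file and add the one suppression the harness writes.
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" > "$asan_suppression_file"

# Runtime options exactly as exported in the trace.
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
export LSAN_OPTIONS=suppressions=$asan_suppression_file

# Any SPDK binary launched from this shell now inherits the sanitizer settings.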
00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2171386 ]] 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2171386 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.aRxrtw 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.aRxrtw/tests/target /tmp/spdk.aRxrtw 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=185211953152 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974283264 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10762330112 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97931505664 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987141632 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185477632 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194857472 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9379840 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97984253952 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987141632 00:06:58.605 14:34:18 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=2887680 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:58.605 * Looking for test storage... 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=185211953152 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12976922624 00:06:58.605 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:58.606 14:34:18 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
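(Aside on the set_test_storage walk-through above: the 2147483648-byte request is padded to 2214592512 bytes, df -T is parsed into per-mount size/avail/use arrays, and the overlay root mount is accepted because its free space covers the request; for a regular root mount the harness additionally checks that used-plus-requested stays under 95% of the filesystem. A small sketch that reruns the same check with the numbers from this run; the function framing and messages are illustrative, the figures come straight from the df output in the trace.)

#!/usr/bin/env bash
# Re-run the test-storage sizing check from the trace with the values df reported.
set -euo pipefail

requested_size=2214592512      # 2 GiB request plus 64 MiB of slack, as computed above
target_space=185211953152      # 'avail' for the overlay mount holding the workspace
fs_size=195974283264           # total size of that mount
fs_used=10762330112            # space already in use on it

if (( target_space == 0 || target_space < requested_size )); then
    echo "not enough space here, trying the next candidate directory" >&2
    exit 1
fi

# On a non-tmpfs/ramfs root mount, also make sure the projected usage stays under 95%.
new_size=$(( fs_used + requested_size ))   # 12976922624 for this run, matching the trace
if (( new_size * 100 / fs_size > 95 )); then
    echo "warning: test storage would push the filesystem above 95% full" >&2
fi

echo "using this mount for test storage (projected usage: $new_size bytes)"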
00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.606 14:34:18 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:58.606 14:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:03.882 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:03.883 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:03.883 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.883 14:34:23 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:03.883 Found net devices under 0000:86:00.0: cvl_0_0 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:03.883 Found net devices under 0000:86:00.1: cvl_0_1 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:03.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:03.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:07:03.883 00:07:03.883 --- 10.0.0.2 ping statistics --- 00:07:03.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.883 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:03.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:03.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:07:03.883 00:07:03.883 --- 10.0.0.1 ping statistics --- 00:07:03.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.883 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.883 ************************************ 00:07:03.883 START TEST nvmf_filesystem_no_in_capsule 00:07:03.883 ************************************ 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2174356 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2174356 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2174356 ']' 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.883 14:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.883 [2024-07-25 14:34:23.881741] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:07:03.883 [2024-07-25 14:34:23.881781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.883 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.883 [2024-07-25 14:34:23.940144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.883 [2024-07-25 14:34:24.021570] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.883 [2024-07-25 14:34:24.021607] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:03.884 [2024-07-25 14:34:24.021614] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.884 [2024-07-25 14:34:24.021620] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.884 [2024-07-25 14:34:24.021624] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
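Up to this point the harness has discovered the two e810 ports (0000:86:00.0 / 0000:86:00.1, driver ice), moved one of them into a dedicated network namespace, and launched the NVMe-oF target inside that namespace. A condensed sketch of those steps, reconstructed from the nvmf_tcp_init and nvmfappstart trace above (interface names, addresses and flags are the ones this particular run used; the real logic lives in nvmf/common.sh):

  # cvl_0_0 becomes the target-side port inside the namespace,
  # cvl_0_1 stays in the root namespace as the initiator-side port.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP through
  ping -c 1 10.0.0.2                                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator
  modprobe nvme-tcp
  # start the target application inside the namespace
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &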
00:07:03.884 [2024-07-25 14:34:24.021739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.884 [2024-07-25 14:34:24.021854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.884 [2024-07-25 14:34:24.021917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.884 [2024-07-25 14:34:24.021918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.452 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.452 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:04.452 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:04.452 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:04.452 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.452 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.711 [2024-07-25 14:34:24.751945] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.711 Malloc1 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.711 [2024-07-25 14:34:24.901814] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:04.711 { 00:07:04.711 "name": "Malloc1", 00:07:04.711 "aliases": [ 00:07:04.711 "4fc34744-6921-4c34-8b05-69b95cfe0cb7" 00:07:04.711 ], 00:07:04.711 "product_name": "Malloc disk", 00:07:04.711 "block_size": 512, 00:07:04.711 "num_blocks": 1048576, 00:07:04.711 "uuid": "4fc34744-6921-4c34-8b05-69b95cfe0cb7", 00:07:04.711 "assigned_rate_limits": { 00:07:04.711 "rw_ios_per_sec": 0, 00:07:04.711 "rw_mbytes_per_sec": 0, 00:07:04.711 "r_mbytes_per_sec": 0, 00:07:04.711 "w_mbytes_per_sec": 0 00:07:04.711 }, 00:07:04.711 "claimed": true, 00:07:04.711 "claim_type": "exclusive_write", 00:07:04.711 "zoned": false, 00:07:04.711 "supported_io_types": { 00:07:04.711 "read": true, 00:07:04.711 "write": true, 00:07:04.711 "unmap": true, 00:07:04.711 "flush": true, 00:07:04.711 "reset": true, 00:07:04.711 "nvme_admin": false, 00:07:04.711 "nvme_io": false, 00:07:04.711 "nvme_io_md": false, 00:07:04.711 "write_zeroes": true, 00:07:04.711 "zcopy": true, 00:07:04.711 "get_zone_info": false, 00:07:04.711 "zone_management": false, 00:07:04.711 "zone_append": false, 00:07:04.711 "compare": false, 00:07:04.711 "compare_and_write": false, 00:07:04.711 "abort": true, 00:07:04.711 "seek_hole": false, 00:07:04.711 "seek_data": false, 00:07:04.711 "copy": true, 00:07:04.711 "nvme_iov_md": false 00:07:04.711 }, 00:07:04.711 "memory_domains": [ 00:07:04.711 { 
00:07:04.711 "dma_device_id": "system", 00:07:04.711 "dma_device_type": 1 00:07:04.711 }, 00:07:04.711 { 00:07:04.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.711 "dma_device_type": 2 00:07:04.711 } 00:07:04.711 ], 00:07:04.711 "driver_specific": {} 00:07:04.711 } 00:07:04.711 ]' 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:04.711 14:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:05.015 14:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:05.015 14:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:05.015 14:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:05.015 14:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:05.015 14:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:05.975 14:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:05.975 14:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:05.975 14:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:05.975 14:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:05.975 14:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:08.514 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:08.774 14:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.713 ************************************ 00:07:09.713 START TEST filesystem_ext4 00:07:09.713 ************************************ 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:09.713 14:34:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:09.713 14:34:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:09.713 mke2fs 1.46.5 (30-Dec-2021) 00:07:09.972 Discarding device blocks: 0/522240 done 00:07:09.973 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:09.973 Filesystem UUID: fd3e8ef8-cf33-4be6-8712-1b19ea926fdc 00:07:09.973 Superblock backups stored on blocks: 00:07:09.973 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:09.973 00:07:09.973 Allocating group tables: 0/64 done 00:07:09.973 Writing inode tables: 0/64 done 00:07:10.541 Creating journal (8192 blocks): done 00:07:11.365 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:11.365 00:07:11.365 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:11.365 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2174356 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:11.624 00:07:11.624 real 0m1.886s 00:07:11.624 user 0m0.019s 00:07:11.624 sys 0m0.049s 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:11.624 ************************************ 00:07:11.624 END TEST filesystem_ext4 00:07:11.624 ************************************ 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:11.624 14:34:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.624 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.884 ************************************ 00:07:11.884 START TEST filesystem_btrfs 00:07:11.884 ************************************ 00:07:11.884 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:11.884 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:11.884 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:11.884 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:11.884 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:11.884 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:11.884 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:11.884 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:11.884 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:11.884 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:11.884 14:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:12.144 btrfs-progs v6.6.2 00:07:12.144 See https://btrfs.readthedocs.io for more information. 00:07:12.144 00:07:12.144 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:12.144 NOTE: several default settings have changed in version 5.15, please make sure 00:07:12.144 this does not affect your deployments: 00:07:12.144 - DUP for metadata (-m dup) 00:07:12.144 - enabled no-holes (-O no-holes) 00:07:12.144 - enabled free-space-tree (-R free-space-tree) 00:07:12.144 00:07:12.144 Label: (null) 00:07:12.144 UUID: e5f5e947-8513-4e0e-9b32-01bed570adaf 00:07:12.144 Node size: 16384 00:07:12.144 Sector size: 4096 00:07:12.144 Filesystem size: 510.00MiB 00:07:12.144 Block group profiles: 00:07:12.144 Data: single 8.00MiB 00:07:12.144 Metadata: DUP 32.00MiB 00:07:12.144 System: DUP 8.00MiB 00:07:12.144 SSD detected: yes 00:07:12.144 Zoned device: no 00:07:12.144 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:12.144 Runtime features: free-space-tree 00:07:12.144 Checksum: crc32c 00:07:12.144 Number of devices: 1 00:07:12.144 Devices: 00:07:12.144 ID SIZE PATH 00:07:12.144 1 510.00MiB /dev/nvme0n1p1 00:07:12.144 00:07:12.144 14:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:12.144 14:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2174356 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:13.084 00:07:13.084 real 0m1.298s 00:07:13.084 user 0m0.029s 00:07:13.084 sys 0m0.054s 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:13.084 ************************************ 00:07:13.084 END TEST filesystem_btrfs 00:07:13.084 ************************************ 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.084 ************************************ 00:07:13.084 START TEST filesystem_xfs 00:07:13.084 ************************************ 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:13.084 14:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:13.344 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:13.344 = sectsz=512 attr=2, projid32bit=1 00:07:13.344 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:13.344 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:13.344 data = bsize=4096 blocks=130560, imaxpct=25 00:07:13.344 = sunit=0 swidth=0 blks 00:07:13.344 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:13.344 log =internal log bsize=4096 blocks=16384, version=2 00:07:13.344 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:13.344 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:13.913 Discarding blocks...Done. 
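Each filesystem_* sub-test follows the same pattern, visible in the ext4 and btrfs traces above and repeated for xfs below: force-format the GPT partition on the exported namespace, mount it, create and delete a file, unmount, then confirm that the target process and the block devices are still alive. A condensed sketch of that check, reconstructed from the xtrace (the retry and error handling in target/filesystem.sh is omitted):

  fstype=xfs                                  # ext4 / btrfs / xfs, one sub-test each
  force=-f; [ "$fstype" = ext4 ] && force=-F  # ext4 wants -F, the others -f
  mkfs.$fstype $force /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa                       # prove the filesystem is writable
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                          # target process must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1       # controller and partition still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1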
00:07:13.913 14:34:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:13.913 14:34:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2174356 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:16.453 00:07:16.453 real 0m3.172s 00:07:16.453 user 0m0.023s 00:07:16.453 sys 0m0.050s 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:16.453 ************************************ 00:07:16.453 END TEST filesystem_xfs 00:07:16.453 ************************************ 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:16.453 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:16.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.714 14:34:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2174356 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2174356 ']' 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2174356 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2174356 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2174356' 00:07:16.714 killing process with pid 2174356 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2174356 00:07:16.714 14:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2174356 00:07:16.974 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:16.974 00:07:16.974 real 0m13.424s 00:07:16.974 user 0m52.819s 00:07:16.974 sys 0m1.102s 00:07:16.974 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.974 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.974 ************************************ 00:07:16.974 END TEST nvmf_filesystem_no_in_capsule 00:07:16.974 ************************************ 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:17.234 ************************************ 00:07:17.234 START TEST nvmf_filesystem_in_capsule 00:07:17.234 ************************************ 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2176720 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2176720 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2176720 ']' 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.234 14:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.234 [2024-07-25 14:34:37.389998] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:07:17.234 [2024-07-25 14:34:37.390037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.234 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.234 [2024-07-25 14:34:37.448687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.494 [2024-07-25 14:34:37.530069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.494 [2024-07-25 14:34:37.530102] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:17.494 [2024-07-25 14:34:37.530110] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.494 [2024-07-25 14:34:37.530115] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.494 [2024-07-25 14:34:37.530121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.494 [2024-07-25 14:34:37.530163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.494 [2024-07-25 14:34:37.530240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.494 [2024-07-25 14:34:37.530324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.494 [2024-07-25 14:34:37.530325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.063 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.063 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:18.063 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:18.063 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:18.063 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.063 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.063 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:18.063 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:18.063 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.063 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.063 [2024-07-25 14:34:38.244095] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.063 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.063 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:18.063 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.063 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.323 Malloc1 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.323 14:34:38 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.323 [2024-07-25 14:34:38.397512] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:18.323 { 00:07:18.323 "name": "Malloc1", 00:07:18.323 "aliases": [ 00:07:18.323 "3527f109-49e0-4de6-8b32-db63b5deb871" 00:07:18.323 ], 00:07:18.323 "product_name": "Malloc disk", 00:07:18.323 "block_size": 512, 00:07:18.323 "num_blocks": 1048576, 00:07:18.323 "uuid": "3527f109-49e0-4de6-8b32-db63b5deb871", 00:07:18.323 "assigned_rate_limits": { 00:07:18.323 "rw_ios_per_sec": 0, 00:07:18.323 "rw_mbytes_per_sec": 0, 00:07:18.323 "r_mbytes_per_sec": 0, 00:07:18.323 "w_mbytes_per_sec": 0 00:07:18.323 }, 00:07:18.323 "claimed": true, 00:07:18.323 "claim_type": "exclusive_write", 00:07:18.323 "zoned": false, 00:07:18.323 "supported_io_types": { 00:07:18.323 "read": true, 00:07:18.323 "write": true, 00:07:18.323 "unmap": true, 00:07:18.323 "flush": true, 00:07:18.323 "reset": true, 00:07:18.323 "nvme_admin": false, 00:07:18.323 "nvme_io": false, 00:07:18.323 "nvme_io_md": false, 00:07:18.323 "write_zeroes": true, 00:07:18.323 "zcopy": true, 00:07:18.323 "get_zone_info": false, 00:07:18.323 "zone_management": false, 00:07:18.323 
"zone_append": false, 00:07:18.323 "compare": false, 00:07:18.323 "compare_and_write": false, 00:07:18.323 "abort": true, 00:07:18.323 "seek_hole": false, 00:07:18.323 "seek_data": false, 00:07:18.323 "copy": true, 00:07:18.323 "nvme_iov_md": false 00:07:18.323 }, 00:07:18.323 "memory_domains": [ 00:07:18.323 { 00:07:18.323 "dma_device_id": "system", 00:07:18.323 "dma_device_type": 1 00:07:18.323 }, 00:07:18.323 { 00:07:18.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.323 "dma_device_type": 2 00:07:18.323 } 00:07:18.323 ], 00:07:18.323 "driver_specific": {} 00:07:18.323 } 00:07:18.323 ]' 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:18.323 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:18.324 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:18.324 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:18.324 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:18.324 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:18.324 14:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.704 14:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:19.704 14:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:19.704 14:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:19.704 14:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:19.704 14:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:21.613 14:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:21.872 14:34:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:21.873 14:34:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.262 ************************************ 00:07:23.262 START TEST filesystem_in_capsule_ext4 00:07:23.262 ************************************ 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:23.262 14:34:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:23.262 14:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:23.262 mke2fs 1.46.5 (30-Dec-2021) 00:07:23.262 Discarding device blocks: 0/522240 done 00:07:23.262 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:23.262 Filesystem UUID: c704629c-e49a-471e-bf11-9553803cf94f 00:07:23.262 Superblock backups stored on blocks: 00:07:23.262 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:23.262 00:07:23.262 Allocating group tables: 0/64 done 00:07:23.262 Writing inode tables: 0/64 done 00:07:23.262 Creating journal (8192 blocks): done 00:07:24.351 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:07:24.351 00:07:24.351 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:24.351 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:24.352 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:24.352 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:24.352 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:24.352 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:24.352 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:24.352 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:24.352 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2176720 00:07:24.352 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:24.352 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:24.352 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:24.352 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:24.352 00:07:24.352 real 0m1.473s 00:07:24.352 user 0m0.022s 00:07:24.352 sys 0m0.046s 00:07:24.352 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.352 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:24.352 ************************************ 00:07:24.352 END TEST filesystem_in_capsule_ext4 00:07:24.352 ************************************ 
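[editor's note] The ext4 pass above runs the same sequence each filesystem variant repeats: connect to the subsystem over TCP, wait for the namespace to appear, partition it, build the filesystem, and do a small write/remove cycle through the mount point. A condensed, hedged sketch of that flow, using only commands and arguments visible in this run (host NQN, target address, device name, and mount point are taken from the trace; run as root):

  # connect the initiator to the subsystem created earlier in this run
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
               --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

  # wait until the namespace shows up with the expected serial
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done

  # one GPT partition spanning the namespace, then refresh the kernel's view
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1

  # build the filesystem and run the touch/sync/rm cycle seen in the log
  mkfs.ext4 -F /dev/nvme0n1p1
  mkdir -p /mnt/device
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device

The btrfs and xfs passes that follow differ only in the mkfs call.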
00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.612 ************************************ 00:07:24.612 START TEST filesystem_in_capsule_btrfs 00:07:24.612 ************************************ 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:24.612 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:24.872 btrfs-progs v6.6.2 00:07:24.872 See https://btrfs.readthedocs.io for more information. 00:07:24.872 00:07:24.872 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:24.872 NOTE: several default settings have changed in version 5.15, please make sure 00:07:24.872 this does not affect your deployments: 00:07:24.872 - DUP for metadata (-m dup) 00:07:24.872 - enabled no-holes (-O no-holes) 00:07:24.872 - enabled free-space-tree (-R free-space-tree) 00:07:24.872 00:07:24.872 Label: (null) 00:07:24.872 UUID: 8db95183-4c85-42c1-8895-4aa86c84394c 00:07:24.872 Node size: 16384 00:07:24.872 Sector size: 4096 00:07:24.872 Filesystem size: 510.00MiB 00:07:24.872 Block group profiles: 00:07:24.872 Data: single 8.00MiB 00:07:24.872 Metadata: DUP 32.00MiB 00:07:24.872 System: DUP 8.00MiB 00:07:24.872 SSD detected: yes 00:07:24.872 Zoned device: no 00:07:24.872 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:24.872 Runtime features: free-space-tree 00:07:24.872 Checksum: crc32c 00:07:24.872 Number of devices: 1 00:07:24.872 Devices: 00:07:24.872 ID SIZE PATH 00:07:24.872 1 510.00MiB /dev/nvme0n1p1 00:07:24.872 00:07:24.872 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:24.872 14:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2176720 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:25.849 00:07:25.849 real 0m1.265s 00:07:25.849 user 0m0.017s 00:07:25.849 sys 0m0.064s 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 ************************************ 00:07:25.849 END TEST filesystem_in_capsule_btrfs 00:07:25.849 ************************************ 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.849 14:34:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 ************************************ 00:07:25.849 START TEST filesystem_in_capsule_xfs 00:07:25.849 ************************************ 00:07:25.849 14:34:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:25.849 14:34:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:25.850 14:34:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.850 14:34:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:25.850 14:34:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:25.850 14:34:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:25.850 14:34:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:25.850 14:34:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:25.850 14:34:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:25.850 14:34:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:25.850 14:34:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:25.850 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:25.850 = sectsz=512 attr=2, projid32bit=1 00:07:25.850 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:25.850 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:25.850 data = bsize=4096 blocks=130560, imaxpct=25 00:07:25.850 = sunit=0 swidth=0 blks 00:07:25.850 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:25.850 log =internal log bsize=4096 blocks=16384, version=2 00:07:25.850 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:25.850 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:26.787 Discarding blocks...Done. 
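[editor's note] The differing mkfs invocations in these passes (mkfs.ext4 -F vs. mkfs.btrfs/mkfs.xfs -f) come from the make_filesystem helper traced at common/autotest_common.sh above. A minimal sketch of that selection logic, reconstructed only from the xtrace lines in this run (the real helper keeps extra bookkeeping such as the local i=0 counter, not reproduced here):

  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local force
      # ext4 takes an uppercase force flag; btrfs and xfs take -f (as traced above)
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      mkfs.$fstype $force "$dev_name" && return 0
  }

Calling make_filesystem xfs /dev/nvme0n1p1 produces the mkfs.xfs -f invocation whose output appears directly above.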
00:07:26.787 14:34:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:26.787 14:34:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2176720 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:29.326 00:07:29.326 real 0m3.349s 00:07:29.326 user 0m0.016s 00:07:29.326 sys 0m0.056s 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:29.326 ************************************ 00:07:29.326 END TEST filesystem_in_capsule_xfs 00:07:29.326 ************************************ 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:29.326 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:29.586 14:34:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2176720 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2176720 ']' 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2176720 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2176720 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2176720' 00:07:29.586 killing process with pid 2176720 00:07:29.586 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2176720 00:07:29.587 14:34:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2176720 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:30.156 00:07:30.156 real 0m12.838s 00:07:30.156 user 0m50.393s 00:07:30.156 sys 0m1.108s 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.156 ************************************ 00:07:30.156 END TEST nvmf_filesystem_in_capsule 00:07:30.156 ************************************ 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:30.156 rmmod nvme_tcp 00:07:30.156 rmmod nvme_fabrics 00:07:30.156 rmmod nvme_keyring 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.156 14:34:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.064 14:34:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:32.064 00:07:32.064 real 0m33.900s 00:07:32.064 user 1m44.746s 00:07:32.064 sys 0m6.272s 00:07:32.064 14:34:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.064 14:34:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.065 ************************************ 00:07:32.065 END TEST nvmf_filesystem 00:07:32.065 ************************************ 00:07:32.325 14:34:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:32.325 14:34:52 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:32.325 14:34:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:32.325 14:34:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.325 14:34:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:32.325 ************************************ 00:07:32.325 START TEST nvmf_target_discovery 00:07:32.325 ************************************ 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:32.325 * Looking for test storage... 
00:07:32.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:32.325 14:34:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.897 14:34:58 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:38.897 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.897 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:38.898 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:38.898 Found net devices under 0000:86:00.0: cvl_0_0 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:38.898 Found net devices under 0000:86:00.1: cvl_0_1 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:38.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:07:38.898 00:07:38.898 --- 10.0.0.2 ping statistics --- 00:07:38.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.898 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:38.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:07:38.898 00:07:38.898 --- 10.0.0.1 ping statistics --- 00:07:38.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.898 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2182534 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2182534 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2182534 ']' 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:38.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.898 14:34:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:38.898 [2024-07-25 14:34:58.378548] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:07:38.898 [2024-07-25 14:34:58.378596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.898 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.898 [2024-07-25 14:34:58.436535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.898 [2024-07-25 14:34:58.512317] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.898 [2024-07-25 14:34:58.512351] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.898 [2024-07-25 14:34:58.512359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.898 [2024-07-25 14:34:58.512366] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.898 [2024-07-25 14:34:58.512371] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.898 [2024-07-25 14:34:58.512419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.898 [2024-07-25 14:34:58.512514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.898 [2024-07-25 14:34:58.512576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.898 [2024-07-25 14:34:58.512577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.898 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.898 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:38.898 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:38.898 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:38.898 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 [2024-07-25 14:34:59.228933] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 Null1 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 [2024-07-25 14:34:59.274398] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 Null2 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:39.159 14:34:59 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 Null3 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 Null4 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.159 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:39.420 00:07:39.420 Discovery Log Number of Records 6, Generation counter 6 00:07:39.420 =====Discovery Log Entry 0====== 00:07:39.420 trtype: tcp 00:07:39.420 adrfam: ipv4 00:07:39.420 subtype: current discovery subsystem 00:07:39.420 treq: not required 00:07:39.420 portid: 0 00:07:39.420 trsvcid: 4420 00:07:39.420 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:39.420 traddr: 10.0.0.2 00:07:39.420 eflags: explicit discovery connections, duplicate discovery information 00:07:39.420 sectype: none 00:07:39.420 =====Discovery Log Entry 1====== 00:07:39.420 trtype: tcp 00:07:39.420 adrfam: ipv4 00:07:39.420 subtype: nvme subsystem 00:07:39.420 treq: not required 00:07:39.420 portid: 0 00:07:39.420 trsvcid: 4420 00:07:39.420 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:39.420 traddr: 10.0.0.2 00:07:39.420 eflags: none 00:07:39.420 sectype: none 00:07:39.420 =====Discovery Log Entry 2====== 00:07:39.420 trtype: tcp 00:07:39.420 adrfam: ipv4 00:07:39.420 subtype: nvme subsystem 00:07:39.420 treq: not required 00:07:39.420 portid: 0 00:07:39.420 trsvcid: 4420 00:07:39.420 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:39.420 traddr: 10.0.0.2 00:07:39.420 eflags: none 00:07:39.420 sectype: none 00:07:39.420 =====Discovery Log Entry 3====== 00:07:39.420 trtype: tcp 00:07:39.420 adrfam: ipv4 00:07:39.420 subtype: nvme subsystem 00:07:39.420 treq: not required 00:07:39.420 portid: 0 00:07:39.420 trsvcid: 4420 00:07:39.420 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:39.420 traddr: 10.0.0.2 00:07:39.420 eflags: none 00:07:39.420 sectype: none 00:07:39.420 =====Discovery Log Entry 4====== 00:07:39.420 trtype: tcp 00:07:39.420 adrfam: ipv4 00:07:39.420 subtype: nvme subsystem 00:07:39.420 treq: not required 
00:07:39.420 portid: 0 00:07:39.420 trsvcid: 4420 00:07:39.420 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:39.420 traddr: 10.0.0.2 00:07:39.420 eflags: none 00:07:39.420 sectype: none 00:07:39.420 =====Discovery Log Entry 5====== 00:07:39.420 trtype: tcp 00:07:39.420 adrfam: ipv4 00:07:39.420 subtype: discovery subsystem referral 00:07:39.420 treq: not required 00:07:39.420 portid: 0 00:07:39.420 trsvcid: 4430 00:07:39.420 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:39.420 traddr: 10.0.0.2 00:07:39.420 eflags: none 00:07:39.420 sectype: none 00:07:39.420 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:39.420 Perform nvmf subsystem discovery via RPC 00:07:39.420 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:39.420 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.420 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.420 [ 00:07:39.420 { 00:07:39.420 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:39.420 "subtype": "Discovery", 00:07:39.420 "listen_addresses": [ 00:07:39.420 { 00:07:39.420 "trtype": "TCP", 00:07:39.420 "adrfam": "IPv4", 00:07:39.420 "traddr": "10.0.0.2", 00:07:39.420 "trsvcid": "4420" 00:07:39.420 } 00:07:39.420 ], 00:07:39.420 "allow_any_host": true, 00:07:39.420 "hosts": [] 00:07:39.420 }, 00:07:39.420 { 00:07:39.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:39.420 "subtype": "NVMe", 00:07:39.420 "listen_addresses": [ 00:07:39.420 { 00:07:39.420 "trtype": "TCP", 00:07:39.420 "adrfam": "IPv4", 00:07:39.420 "traddr": "10.0.0.2", 00:07:39.420 "trsvcid": "4420" 00:07:39.420 } 00:07:39.420 ], 00:07:39.420 "allow_any_host": true, 00:07:39.420 "hosts": [], 00:07:39.420 "serial_number": "SPDK00000000000001", 00:07:39.420 "model_number": "SPDK bdev Controller", 00:07:39.420 "max_namespaces": 32, 00:07:39.420 "min_cntlid": 1, 00:07:39.420 "max_cntlid": 65519, 00:07:39.420 "namespaces": [ 00:07:39.420 { 00:07:39.420 "nsid": 1, 00:07:39.420 "bdev_name": "Null1", 00:07:39.420 "name": "Null1", 00:07:39.420 "nguid": "9A167AA96FAC47318D2431575B55E92E", 00:07:39.420 "uuid": "9a167aa9-6fac-4731-8d24-31575b55e92e" 00:07:39.420 } 00:07:39.420 ] 00:07:39.420 }, 00:07:39.420 { 00:07:39.420 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:39.420 "subtype": "NVMe", 00:07:39.420 "listen_addresses": [ 00:07:39.420 { 00:07:39.420 "trtype": "TCP", 00:07:39.420 "adrfam": "IPv4", 00:07:39.420 "traddr": "10.0.0.2", 00:07:39.420 "trsvcid": "4420" 00:07:39.420 } 00:07:39.420 ], 00:07:39.420 "allow_any_host": true, 00:07:39.420 "hosts": [], 00:07:39.420 "serial_number": "SPDK00000000000002", 00:07:39.420 "model_number": "SPDK bdev Controller", 00:07:39.420 "max_namespaces": 32, 00:07:39.420 "min_cntlid": 1, 00:07:39.420 "max_cntlid": 65519, 00:07:39.420 "namespaces": [ 00:07:39.420 { 00:07:39.420 "nsid": 1, 00:07:39.420 "bdev_name": "Null2", 00:07:39.420 "name": "Null2", 00:07:39.420 "nguid": "7FA21720846945798CA277DC4B59582C", 00:07:39.420 "uuid": "7fa21720-8469-4579-8ca2-77dc4b59582c" 00:07:39.420 } 00:07:39.420 ] 00:07:39.420 }, 00:07:39.420 { 00:07:39.420 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:39.420 "subtype": "NVMe", 00:07:39.420 "listen_addresses": [ 00:07:39.420 { 00:07:39.420 "trtype": "TCP", 00:07:39.420 "adrfam": "IPv4", 00:07:39.420 "traddr": "10.0.0.2", 00:07:39.420 "trsvcid": "4420" 00:07:39.420 } 00:07:39.420 ], 00:07:39.420 "allow_any_host": true, 
00:07:39.420 "hosts": [], 00:07:39.420 "serial_number": "SPDK00000000000003", 00:07:39.420 "model_number": "SPDK bdev Controller", 00:07:39.420 "max_namespaces": 32, 00:07:39.420 "min_cntlid": 1, 00:07:39.420 "max_cntlid": 65519, 00:07:39.420 "namespaces": [ 00:07:39.420 { 00:07:39.420 "nsid": 1, 00:07:39.420 "bdev_name": "Null3", 00:07:39.420 "name": "Null3", 00:07:39.420 "nguid": "0A5CB35930DD45D9873A39AA927086A6", 00:07:39.420 "uuid": "0a5cb359-30dd-45d9-873a-39aa927086a6" 00:07:39.420 } 00:07:39.420 ] 00:07:39.420 }, 00:07:39.420 { 00:07:39.420 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:39.420 "subtype": "NVMe", 00:07:39.420 "listen_addresses": [ 00:07:39.420 { 00:07:39.420 "trtype": "TCP", 00:07:39.420 "adrfam": "IPv4", 00:07:39.420 "traddr": "10.0.0.2", 00:07:39.420 "trsvcid": "4420" 00:07:39.420 } 00:07:39.420 ], 00:07:39.420 "allow_any_host": true, 00:07:39.420 "hosts": [], 00:07:39.420 "serial_number": "SPDK00000000000004", 00:07:39.420 "model_number": "SPDK bdev Controller", 00:07:39.420 "max_namespaces": 32, 00:07:39.420 "min_cntlid": 1, 00:07:39.420 "max_cntlid": 65519, 00:07:39.420 "namespaces": [ 00:07:39.420 { 00:07:39.420 "nsid": 1, 00:07:39.420 "bdev_name": "Null4", 00:07:39.420 "name": "Null4", 00:07:39.420 "nguid": "331A88E085A8435F9009BF26166875B1", 00:07:39.420 "uuid": "331a88e0-85a8-435f-9009-bf26166875b1" 00:07:39.420 } 00:07:39.420 ] 00:07:39.420 } 00:07:39.420 ] 00:07:39.420 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.420 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:39.420 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:39.421 rmmod nvme_tcp 00:07:39.421 rmmod nvme_fabrics 00:07:39.421 rmmod nvme_keyring 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2182534 ']' 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2182534 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2182534 ']' 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2182534 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2182534 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2182534' 00:07:39.421 killing process with pid 2182534 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2182534 00:07:39.421 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2182534 00:07:39.681 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:39.681 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:39.681 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:39.681 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:39.681 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:39.681 14:34:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.681 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.681 14:34:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.224 14:35:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:42.224 00:07:42.224 real 0m9.530s 00:07:42.224 user 0m7.181s 00:07:42.224 sys 0m4.738s 00:07:42.224 14:35:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.224 14:35:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.224 ************************************ 00:07:42.224 END TEST nvmf_target_discovery 00:07:42.224 ************************************ 00:07:42.224 14:35:01 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:07:42.224 14:35:01 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:42.224 14:35:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:42.224 14:35:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.224 14:35:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.224 ************************************ 00:07:42.224 START TEST nvmf_referrals 00:07:42.224 ************************************ 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:42.224 * Looking for test storage... 00:07:42.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:42.224 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
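Those referral addresses (127.0.0.2 through 127.0.0.4) are what the rest of this test exercises. Condensed from the trace that follows, with the rpc_cmd helper expanded to scripts/rpc.py, the flow is roughly the sketch below; the addresses, referral port 4430 and the 10.0.0.2:8009 discovery listener are taken from this run, not defaults.

# Rough sketch of the referral flow driven by target/referrals.sh.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery       # listener for the discovery subsystem
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'               # target view: expect the three IPs
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'      # initiator view must match
./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430              # removal is verified the same way

The later part of the test re-adds 127.0.0.2 with an explicit subsystem NQN (-n nqn.2016-06.io.spdk:cnode1) and checks that the initiator then sees it as an "nvme subsystem" record rather than a discovery referral.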
00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:42.225 14:35:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.504 14:35:07 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:47.504 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:47.504 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:47.504 14:35:07 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:47.504 Found net devices under 0000:86:00.0: cvl_0_0 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:47.504 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:47.505 Found net devices under 0000:86:00.1: cvl_0_1 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.505 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.505 14:35:07 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:47.764 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.764 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.764 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.764 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:47.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:07:47.764 00:07:47.764 --- 10.0.0.2 ping statistics --- 00:07:47.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.764 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:07:47.764 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:07:47.765 00:07:47.765 --- 10.0.0.1 ping statistics --- 00:07:47.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.765 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2186311 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2186311 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2186311 ']' 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:47.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.765 14:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.765 [2024-07-25 14:35:08.011484] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:07:47.765 [2024-07-25 14:35:08.011529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.765 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.024 [2024-07-25 14:35:08.071776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.024 [2024-07-25 14:35:08.146872] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.024 [2024-07-25 14:35:08.146914] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.024 [2024-07-25 14:35:08.146924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.024 [2024-07-25 14:35:08.146930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.024 [2024-07-25 14:35:08.146935] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.024 [2024-07-25 14:35:08.146984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.024 [2024-07-25 14:35:08.147084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.024 [2024-07-25 14:35:08.147137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.024 [2024-07-25 14:35:08.147138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.593 [2024-07-25 14:35:08.857102] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.593 [2024-07-25 14:35:08.870510] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.593 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:48.853 14:35:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.853 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.112 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:49.113 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:49.371 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:49.371 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:49.371 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:49.371 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:49.371 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:49.371 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.371 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:49.372 14:35:09 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:49.372 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:49.372 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:49.372 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:49.372 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.372 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:49.631 14:35:09 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.631 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:49.890 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:49.890 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:49.890 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:49.890 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:49.890 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.890 14:35:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.890 14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:50.151 
14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:50.151 rmmod nvme_tcp 00:07:50.151 rmmod nvme_fabrics 00:07:50.151 rmmod nvme_keyring 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2186311 ']' 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2186311 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2186311 ']' 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2186311 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2186311 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2186311' 00:07:50.151 killing process with pid 2186311 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2186311 00:07:50.151 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2186311 00:07:50.411 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:50.411 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:50.411 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:50.411 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:50.411 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:50.411 14:35:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.411 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.411 14:35:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.363 14:35:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:52.363 00:07:52.363 real 0m10.627s 00:07:52.363 user 0m12.168s 00:07:52.363 sys 0m4.875s 00:07:52.363 14:35:12 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.363 14:35:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.363 ************************************ 00:07:52.363 END TEST nvmf_referrals 00:07:52.363 ************************************ 00:07:52.623 14:35:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:52.623 14:35:12 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:52.623 14:35:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:52.623 14:35:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.623 14:35:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.624 ************************************ 00:07:52.624 START TEST nvmf_connect_disconnect 00:07:52.624 ************************************ 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:52.624 * Looking for test storage... 00:07:52.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.624 14:35:12 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:52.624 14:35:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:57.915 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:57.915 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:57.915 14:35:17 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:57.915 Found net devices under 0000:86:00.0: cvl_0_0 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:57.915 Found net devices under 0000:86:00.1: cvl_0_1 00:07:57.915 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.916 14:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.916 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.916 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.916 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:57.916 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.916 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.916 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.916 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:57.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:07:57.916 00:07:57.916 --- 10.0.0.2 ping statistics --- 00:07:57.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.916 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:07:57.916 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:58.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:07:58.176 00:07:58.176 --- 10.0.0.1 ping statistics --- 00:07:58.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.176 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2190337 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2190337 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2190337 ']' 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.176 14:35:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:58.176 [2024-07-25 14:35:18.290349] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
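For reference, the nvmf_tcp_init sequence traced just above reduces to roughly the following (a minimal sketch: the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are simply the ones this run used, and the real nvmf/common.sh helper wraps each step in error handling that is omitted here):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                  # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # verify both directions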
00:07:58.176 [2024-07-25 14:35:18.290394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.176 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.176 [2024-07-25 14:35:18.349020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.176 [2024-07-25 14:35:18.429644] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.176 [2024-07-25 14:35:18.429678] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.176 [2024-07-25 14:35:18.429685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.176 [2024-07-25 14:35:18.429691] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.176 [2024-07-25 14:35:18.429696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.176 [2024-07-25 14:35:18.429730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.176 [2024-07-25 14:35:18.429825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.176 [2024-07-25 14:35:18.429887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.176 [2024-07-25 14:35:18.429888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:59.112 [2024-07-25 14:35:19.142915] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:59.112 14:35:19 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:59.112 [2024-07-25 14:35:19.194488] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:59.112 14:35:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:02.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:15.560 rmmod nvme_tcp 00:08:15.560 rmmod nvme_fabrics 00:08:15.560 rmmod nvme_keyring 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2190337 ']' 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2190337 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 2190337 ']' 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2190337 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2190337 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2190337' 00:08:15.560 killing process with pid 2190337 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2190337 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2190337 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.560 14:35:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.099 14:35:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:18.099 00:08:18.099 real 0m25.081s 00:08:18.099 user 1m10.348s 00:08:18.099 sys 0m5.070s 00:08:18.099 14:35:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.099 14:35:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.099 ************************************ 00:08:18.099 END TEST nvmf_connect_disconnect 00:08:18.099 ************************************ 00:08:18.099 14:35:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:18.099 14:35:37 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:18.099 14:35:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:18.099 14:35:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.099 14:35:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:18.099 ************************************ 00:08:18.099 START TEST nvmf_multitarget 00:08:18.099 ************************************ 00:08:18.099 14:35:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:18.099 * Looking for test storage... 
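For reference, the connect_disconnect flow whose output appears above (five "disconnected 1 controller(s)" lines) reduces to roughly the following. This is a sketch only: rpc_cmd stands for SPDK's scripts/rpc.py talking to the target's /var/tmp/spdk.sock, and the connect/disconnect loop body is inferred from the nvme-cli messages rather than copied from connect_disconnect.sh.

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512                             # creates Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 5); do                                       # num_iterations=5 in the trace
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1               # prints "... disconnected 1 controller(s)"
  done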
00:08:18.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.099 14:35:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.099 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:18.099 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.099 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.099 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.099 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.099 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
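Each test re-sources nvmf/common.sh, which is why the same host NQN and host ID keep reappearing in the trace above. Condensed, that identity setup is roughly (sketch: the log only shows the resulting values, so the uuid extraction on the second line is an assumption about how nvmf/common.sh derives the host ID):

  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # assumed: keep only the trailing uuid
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")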
00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:18.100 14:35:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:23.419 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.419 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.419 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.419 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.419 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.419 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.419 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.419 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.419 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:23.420 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:23.420 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:23.420 Found net devices under 0000:86:00.0: cvl_0_0 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
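A condensed sketch of the per-port NIC discovery being traced here (the second port, 0000:86:00.1, follows below): the PCI addresses and the 0x8086/0x159b device ID are the ones on this host, and the real gather_supported_nvmf_pci_devs also covers the x722 and Mellanox IDs listed in the arrays above.

  for pci in 0000:86:00.0 0000:86:00.1; do                      # E810 ports bound to the ice driver
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
      echo "Found net devices under $pci: ${netdir##*/}"        # -> cvl_0_0, cvl_0_1
    done
  done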
00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:23.420 Found net devices under 0000:86:00.1: cvl_0_1 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:23.420 14:35:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:23.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:23.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:08:23.420 00:08:23.420 --- 10.0.0.2 ping statistics --- 00:08:23.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.420 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:23.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:08:23.420 00:08:23.420 --- 10.0.0.1 ping statistics --- 00:08:23.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.420 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2196604 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2196604 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2196604 ']' 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.420 14:35:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:23.421 [2024-07-25 14:35:43.153605] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
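For reference, the nvmfappstart step captured above amounts to roughly the following (sketch: waitforlisten is the autotest_common.sh helper that polls until the RPC socket answers, and capturing the PID with $! is an assumption about the wrapper):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"      # waits for /var/tmp/spdk.sock before the test issues RPCs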
00:08:23.421 [2024-07-25 14:35:43.153649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.421 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.421 [2024-07-25 14:35:43.212356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.421 [2024-07-25 14:35:43.285933] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.421 [2024-07-25 14:35:43.285976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.421 [2024-07-25 14:35:43.285983] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.421 [2024-07-25 14:35:43.285990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.421 [2024-07-25 14:35:43.285995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.421 [2024-07-25 14:35:43.286038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.421 [2024-07-25 14:35:43.286058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.421 [2024-07-25 14:35:43.286144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.421 [2024-07-25 14:35:43.286146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.679 14:35:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.679 14:35:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:23.679 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:23.679 14:35:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:23.679 14:35:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:23.938 14:35:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.938 14:35:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:23.938 14:35:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:23.938 14:35:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:23.938 14:35:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:23.938 14:35:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:23.938 "nvmf_tgt_1" 00:08:23.938 14:35:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:24.198 "nvmf_tgt_2" 00:08:24.198 14:35:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:24.198 14:35:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:24.198 14:35:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:24.198 14:35:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:24.457 true 00:08:24.457 14:35:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:24.457 true 00:08:24.457 14:35:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:24.457 14:35:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:24.457 14:35:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:24.457 14:35:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:24.457 14:35:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:24.457 14:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:24.457 14:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:24.457 14:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:24.457 14:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:24.457 14:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:24.457 14:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:24.457 rmmod nvme_tcp 00:08:24.717 rmmod nvme_fabrics 00:08:24.717 rmmod nvme_keyring 00:08:24.717 14:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:24.717 14:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:24.717 14:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:24.717 14:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2196604 ']' 00:08:24.717 14:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2196604 00:08:24.717 14:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2196604 ']' 00:08:24.717 14:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2196604 00:08:24.717 14:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:24.717 14:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:24.717 14:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2196604 00:08:24.717 14:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:24.717 14:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:24.717 14:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2196604' 00:08:24.717 killing process with pid 2196604 00:08:24.718 14:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2196604 00:08:24.718 14:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2196604 00:08:24.718 14:35:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:24.718 14:35:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:24.718 14:35:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:24.718 14:35:45 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:24.718 14:35:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:24.718 14:35:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.718 14:35:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.977 14:35:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.888 14:35:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:26.888 00:08:26.888 real 0m9.207s 00:08:26.888 user 0m8.911s 00:08:26.888 sys 0m4.323s 00:08:26.888 14:35:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.888 14:35:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:26.888 ************************************ 00:08:26.888 END TEST nvmf_multitarget 00:08:26.888 ************************************ 00:08:26.888 14:35:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:26.888 14:35:47 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:26.888 14:35:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:26.888 14:35:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.888 14:35:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:26.888 ************************************ 00:08:26.888 START TEST nvmf_rpc 00:08:26.888 ************************************ 00:08:26.888 14:35:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:27.148 * Looking for test storage... 
00:08:27.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:27.148 14:35:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
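The traces that follow come from nvmftestinit/prepare_net_devs in test/nvmf/common.sh: the two E810 ports are detected by PCI ID, one of them is moved into a dedicated network namespace, and the nvmf target is later launched inside that namespace while the initiator stays in the root namespace. A condensed, hedged sketch of that topology, using only the interface names (cvl_0_0, cvl_0_1), the namespace name (cvl_0_0_ns_spdk) and the 10.0.0.x addresses that appear in this log, looks roughly like:

# sketch of the phy/TCP test topology built by the helpers traced below
ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
ping -c 1 10.0.0.2                                             # reachability check, both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the target itself is then started inside the namespace, as the later trace shows:
# ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF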
00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:32.440 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:32.441 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:32.441 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:32.441 Found net devices under 0000:86:00.0: cvl_0_0 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:32.441 Found net devices under 0000:86:00.1: cvl_0_1 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.441 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.701 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:32.701 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.701 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.701 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.701 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:32.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:08:32.701 00:08:32.701 --- 10.0.0.2 ping statistics --- 00:08:32.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.701 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:08:32.701 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:32.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:08:32.701 00:08:32.701 --- 10.0.0.1 ping statistics --- 00:08:32.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.701 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:08:32.701 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.701 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:32.701 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.701 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2200479 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2200479 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2200479 ']' 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.702 14:35:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.702 [2024-07-25 14:35:52.952259] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:08:32.702 [2024-07-25 14:35:52.952301] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.702 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.963 [2024-07-25 14:35:53.013906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.963 [2024-07-25 14:35:53.089765] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.963 [2024-07-25 14:35:53.089809] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:32.963 [2024-07-25 14:35:53.089816] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.963 [2024-07-25 14:35:53.089821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.963 [2024-07-25 14:35:53.089826] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.963 [2024-07-25 14:35:53.089875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.963 [2024-07-25 14:35:53.089974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.963 [2024-07-25 14:35:53.090063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.963 [2024-07-25 14:35:53.090065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.534 14:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.534 14:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:33.534 14:35:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:33.534 14:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:33.534 14:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.534 14:35:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.534 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:33.534 14:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.534 14:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.534 14:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.534 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:33.534 "tick_rate": 2300000000, 00:08:33.534 "poll_groups": [ 00:08:33.534 { 00:08:33.534 "name": "nvmf_tgt_poll_group_000", 00:08:33.534 "admin_qpairs": 0, 00:08:33.534 "io_qpairs": 0, 00:08:33.534 "current_admin_qpairs": 0, 00:08:33.534 "current_io_qpairs": 0, 00:08:33.534 "pending_bdev_io": 0, 00:08:33.534 "completed_nvme_io": 0, 00:08:33.534 "transports": [] 00:08:33.534 }, 00:08:33.534 { 00:08:33.534 "name": "nvmf_tgt_poll_group_001", 00:08:33.534 "admin_qpairs": 0, 00:08:33.534 "io_qpairs": 0, 00:08:33.534 "current_admin_qpairs": 0, 00:08:33.534 "current_io_qpairs": 0, 00:08:33.534 "pending_bdev_io": 0, 00:08:33.534 "completed_nvme_io": 0, 00:08:33.534 "transports": [] 00:08:33.534 }, 00:08:33.534 { 00:08:33.534 "name": "nvmf_tgt_poll_group_002", 00:08:33.534 "admin_qpairs": 0, 00:08:33.534 "io_qpairs": 0, 00:08:33.534 "current_admin_qpairs": 0, 00:08:33.534 "current_io_qpairs": 0, 00:08:33.534 "pending_bdev_io": 0, 00:08:33.534 "completed_nvme_io": 0, 00:08:33.534 "transports": [] 00:08:33.534 }, 00:08:33.534 { 00:08:33.534 "name": "nvmf_tgt_poll_group_003", 00:08:33.534 "admin_qpairs": 0, 00:08:33.534 "io_qpairs": 0, 00:08:33.534 "current_admin_qpairs": 0, 00:08:33.534 "current_io_qpairs": 0, 00:08:33.534 "pending_bdev_io": 0, 00:08:33.534 "completed_nvme_io": 0, 00:08:33.534 "transports": [] 00:08:33.534 } 00:08:33.534 ] 00:08:33.534 }' 00:08:33.534 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:33.534 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:33.535 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:33.535 14:35:53 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.795 [2024-07-25 14:35:53.910272] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:33.795 "tick_rate": 2300000000, 00:08:33.795 "poll_groups": [ 00:08:33.795 { 00:08:33.795 "name": "nvmf_tgt_poll_group_000", 00:08:33.795 "admin_qpairs": 0, 00:08:33.795 "io_qpairs": 0, 00:08:33.795 "current_admin_qpairs": 0, 00:08:33.795 "current_io_qpairs": 0, 00:08:33.795 "pending_bdev_io": 0, 00:08:33.795 "completed_nvme_io": 0, 00:08:33.795 "transports": [ 00:08:33.795 { 00:08:33.795 "trtype": "TCP" 00:08:33.795 } 00:08:33.795 ] 00:08:33.795 }, 00:08:33.795 { 00:08:33.795 "name": "nvmf_tgt_poll_group_001", 00:08:33.795 "admin_qpairs": 0, 00:08:33.795 "io_qpairs": 0, 00:08:33.795 "current_admin_qpairs": 0, 00:08:33.795 "current_io_qpairs": 0, 00:08:33.795 "pending_bdev_io": 0, 00:08:33.795 "completed_nvme_io": 0, 00:08:33.795 "transports": [ 00:08:33.795 { 00:08:33.795 "trtype": "TCP" 00:08:33.795 } 00:08:33.795 ] 00:08:33.795 }, 00:08:33.795 { 00:08:33.795 "name": "nvmf_tgt_poll_group_002", 00:08:33.795 "admin_qpairs": 0, 00:08:33.795 "io_qpairs": 0, 00:08:33.795 "current_admin_qpairs": 0, 00:08:33.795 "current_io_qpairs": 0, 00:08:33.795 "pending_bdev_io": 0, 00:08:33.795 "completed_nvme_io": 0, 00:08:33.795 "transports": [ 00:08:33.795 { 00:08:33.795 "trtype": "TCP" 00:08:33.795 } 00:08:33.795 ] 00:08:33.795 }, 00:08:33.795 { 00:08:33.795 "name": "nvmf_tgt_poll_group_003", 00:08:33.795 "admin_qpairs": 0, 00:08:33.795 "io_qpairs": 0, 00:08:33.795 "current_admin_qpairs": 0, 00:08:33.795 "current_io_qpairs": 0, 00:08:33.795 "pending_bdev_io": 0, 00:08:33.795 "completed_nvme_io": 0, 00:08:33.795 "transports": [ 00:08:33.795 { 00:08:33.795 "trtype": "TCP" 00:08:33.795 } 00:08:33.795 ] 00:08:33.795 } 00:08:33.795 ] 00:08:33.795 }' 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
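The checks traced around this point come from the jcount/jsum helpers in target/rpc.sh: nvmf_get_stats is fetched over the application's RPC socket, jq extracts one field from every poll group, and wc -l or awk reduces the output to a single number that is compared against the expected value. A rough restatement of those checks, assuming the same rpc_cmd wrapper already used in this trace, is:

# one poll group per reactor core (-m 0xF => 4 cores), so expect 4 names
rpc_cmd nvmf_get_stats | jq '.poll_groups[].name' | wc -l        # jcount: expect 4
# before the transport exists, the first poll group carries no transport entry
rpc_cmd nvmf_get_stats | jq '.poll_groups[0].transports[0]'      # expect null
rpc_cmd nvmf_create_transport -t tcp -o -u 8192                  # as traced above
# with no hosts connected yet, the qpair counters should sum to zero
rpc_cmd nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'  # jsum: expect 0
rpc_cmd nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1}END{print s}'  # jsum: expect 0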
00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:33.795 14:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.795 Malloc1 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.795 [2024-07-25 14:35:54.078478] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:33.795 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:33.796 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:33.796 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:33.796 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.796 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:34.056 [2024-07-25 14:35:54.103253] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:34.056 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:34.056 could not add new controller: failed to write to nvme-fabrics device 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.056 14:35:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.441 14:35:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:35.441 14:35:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:35.441 14:35:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:35.441 14:35:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:35.441 14:35:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.354 14:35:57 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:37.354 [2024-07-25 14:35:57.469178] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:37.354 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:37.354 could not add new controller: failed to write to nvme-fabrics device 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.354 14:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:38.739 14:35:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:38.739 14:35:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:38.739 14:35:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:38.739 14:35:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:38.739 14:35:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:40.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.650 14:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:40.651 14:36:00 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.651 [2024-07-25 14:36:00.798566] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.651 14:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:42.032 14:36:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:42.032 14:36:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:42.032 14:36:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:42.032 14:36:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:42.032 14:36:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:43.941 14:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:43.941 14:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:43.941 14:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:43.941 14:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:43.941 14:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:43.941 14:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:43.941 14:36:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:43.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.941 14:36:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:43.941 14:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:43.941 14:36:03 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:43.941 14:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.941 [2024-07-25 14:36:04.053368] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.941 14:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:45.325 14:36:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:45.325 14:36:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:08:45.325 14:36:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:45.325 14:36:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:45.325 14:36:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:47.249 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:47.249 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:47.249 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.249 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:47.249 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:47.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.250 [2024-07-25 14:36:07.393159] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.250 14:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:48.634 14:36:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:48.634 14:36:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:48.634 14:36:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:48.634 14:36:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:48.634 14:36:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.544 [2024-07-25 14:36:10.661312] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.544 14:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:51.483 14:36:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:51.483 14:36:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:51.483 14:36:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:51.483 14:36:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:51.483 14:36:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:54.024 
14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:54.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.024 [2024-07-25 14:36:13.899407] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.024 14:36:13 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.024 14:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:54.964 14:36:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.964 14:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:54.964 14:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.964 14:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:54.964 14:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:56.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.876 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 [2024-07-25 14:36:17.196899] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 [2024-07-25 14:36:17.245020] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 [2024-07-25 14:36:17.297162] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.137 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.138 [2024-07-25 14:36:17.345314] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.138 [2024-07-25 14:36:17.393483] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.138 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:57.399 "tick_rate": 2300000000, 00:08:57.399 "poll_groups": [ 00:08:57.399 { 00:08:57.399 "name": "nvmf_tgt_poll_group_000", 00:08:57.399 "admin_qpairs": 2, 00:08:57.399 "io_qpairs": 168, 00:08:57.399 "current_admin_qpairs": 0, 00:08:57.399 "current_io_qpairs": 0, 00:08:57.399 "pending_bdev_io": 0, 00:08:57.399 "completed_nvme_io": 267, 00:08:57.399 "transports": [ 00:08:57.399 { 00:08:57.399 "trtype": "TCP" 00:08:57.399 } 00:08:57.399 ] 00:08:57.399 }, 00:08:57.399 { 00:08:57.399 "name": "nvmf_tgt_poll_group_001", 00:08:57.399 "admin_qpairs": 2, 00:08:57.399 "io_qpairs": 168, 00:08:57.399 "current_admin_qpairs": 0, 00:08:57.399 "current_io_qpairs": 0, 00:08:57.399 "pending_bdev_io": 0, 00:08:57.399 "completed_nvme_io": 235, 00:08:57.399 "transports": [ 00:08:57.399 { 00:08:57.399 "trtype": "TCP" 00:08:57.399 } 00:08:57.399 ] 00:08:57.399 }, 00:08:57.399 { 
00:08:57.399 "name": "nvmf_tgt_poll_group_002", 00:08:57.399 "admin_qpairs": 1, 00:08:57.399 "io_qpairs": 168, 00:08:57.399 "current_admin_qpairs": 0, 00:08:57.399 "current_io_qpairs": 0, 00:08:57.399 "pending_bdev_io": 0, 00:08:57.399 "completed_nvme_io": 302, 00:08:57.399 "transports": [ 00:08:57.399 { 00:08:57.399 "trtype": "TCP" 00:08:57.399 } 00:08:57.399 ] 00:08:57.399 }, 00:08:57.399 { 00:08:57.399 "name": "nvmf_tgt_poll_group_003", 00:08:57.399 "admin_qpairs": 2, 00:08:57.399 "io_qpairs": 168, 00:08:57.399 "current_admin_qpairs": 0, 00:08:57.399 "current_io_qpairs": 0, 00:08:57.399 "pending_bdev_io": 0, 00:08:57.399 "completed_nvme_io": 218, 00:08:57.399 "transports": [ 00:08:57.399 { 00:08:57.399 "trtype": "TCP" 00:08:57.399 } 00:08:57.399 ] 00:08:57.399 } 00:08:57.399 ] 00:08:57.399 }' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.399 rmmod nvme_tcp 00:08:57.399 rmmod nvme_fabrics 00:08:57.399 rmmod nvme_keyring 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2200479 ']' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2200479 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2200479 ']' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2200479 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2200479 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2200479' 00:08:57.399 killing process with pid 2200479 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2200479 00:08:57.399 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2200479 00:08:57.660 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.660 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.660 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:57.660 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.660 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.660 14:36:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.660 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.660 14:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.205 14:36:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.205 00:09:00.205 real 0m32.769s 00:09:00.205 user 1m40.269s 00:09:00.205 sys 0m5.787s 00:09:00.205 14:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.205 14:36:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.205 ************************************ 00:09:00.205 END TEST nvmf_rpc 00:09:00.205 ************************************ 00:09:00.205 14:36:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:00.205 14:36:19 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:00.205 14:36:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:00.205 14:36:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.205 14:36:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:00.205 ************************************ 00:09:00.205 START TEST nvmf_invalid 00:09:00.205 ************************************ 00:09:00.205 14:36:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:00.205 * Looking for test storage... 
00:09:00.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.205 14:36:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.206 14:36:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:05.489 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.489 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:05.489 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:05.489 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:05.489 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:05.489 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:05.489 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:05.489 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:05.489 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:05.489 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:05.490 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:05.490 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:05.490 Found net devices under 0000:86:00.0: cvl_0_0 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:05.490 Found net devices under 0000:86:00.1: cvl_0_1 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:05.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:05.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:09:05.490 00:09:05.490 --- 10.0.0.2 ping statistics --- 00:09:05.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.490 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.487 ms 00:09:05.490 00:09:05.490 --- 10.0.0.1 ping statistics --- 00:09:05.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.490 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2208709 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2208709 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2208709 ']' 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.490 14:36:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:05.490 [2024-07-25 14:36:25.708477] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:09:05.490 [2024-07-25 14:36:25.708525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.490 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.490 [2024-07-25 14:36:25.767380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.750 [2024-07-25 14:36:25.850298] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.750 [2024-07-25 14:36:25.850332] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.750 [2024-07-25 14:36:25.850339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.750 [2024-07-25 14:36:25.850346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.750 [2024-07-25 14:36:25.850351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.750 [2024-07-25 14:36:25.850396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.751 [2024-07-25 14:36:25.850490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.751 [2024-07-25 14:36:25.850551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.751 [2024-07-25 14:36:25.850552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.320 14:36:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.320 14:36:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:06.321 14:36:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.321 14:36:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.321 14:36:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:06.321 14:36:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.321 14:36:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:06.321 14:36:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3935 00:09:06.581 [2024-07-25 14:36:26.720413] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:06.581 14:36:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:06.581 { 00:09:06.581 "nqn": "nqn.2016-06.io.spdk:cnode3935", 00:09:06.581 "tgt_name": "foobar", 00:09:06.581 "method": "nvmf_create_subsystem", 00:09:06.581 "req_id": 1 00:09:06.581 } 00:09:06.581 Got JSON-RPC error response 00:09:06.581 response: 00:09:06.581 { 00:09:06.581 "code": -32603, 00:09:06.581 "message": "Unable to find target foobar" 00:09:06.581 }' 00:09:06.581 14:36:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:06.581 { 00:09:06.581 "nqn": "nqn.2016-06.io.spdk:cnode3935", 00:09:06.581 "tgt_name": "foobar", 00:09:06.581 "method": "nvmf_create_subsystem", 00:09:06.581 "req_id": 1 00:09:06.581 } 00:09:06.581 Got JSON-RPC error response 00:09:06.581 response: 00:09:06.581 { 00:09:06.581 "code": -32603, 00:09:06.581 "message": "Unable to find target foobar" 00:09:06.581 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:06.581 14:36:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:06.581 14:36:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25650 00:09:06.841 [2024-07-25 14:36:26.909096] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25650: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:06.841 14:36:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:06.841 { 00:09:06.841 "nqn": "nqn.2016-06.io.spdk:cnode25650", 00:09:06.841 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:06.841 "method": "nvmf_create_subsystem", 00:09:06.841 "req_id": 1 00:09:06.841 } 00:09:06.841 Got JSON-RPC error response 00:09:06.841 response: 00:09:06.841 { 00:09:06.842 "code": -32602, 00:09:06.842 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:06.842 }' 00:09:06.842 14:36:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:06.842 { 00:09:06.842 "nqn": "nqn.2016-06.io.spdk:cnode25650", 00:09:06.842 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:06.842 "method": "nvmf_create_subsystem", 00:09:06.842 "req_id": 1 00:09:06.842 } 00:09:06.842 Got JSON-RPC error response 00:09:06.842 response: 00:09:06.842 { 00:09:06.842 "code": -32602, 00:09:06.842 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:06.842 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:06.842 14:36:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:06.842 14:36:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12079 00:09:06.842 [2024-07-25 14:36:27.101681] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12079: invalid model number 'SPDK_Controller' 00:09:06.842 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:06.842 { 00:09:06.842 "nqn": "nqn.2016-06.io.spdk:cnode12079", 00:09:06.842 "model_number": "SPDK_Controller\u001f", 00:09:06.842 "method": "nvmf_create_subsystem", 00:09:06.842 "req_id": 1 00:09:06.842 } 00:09:06.842 Got JSON-RPC error response 00:09:06.842 response: 00:09:06.842 { 00:09:06.842 "code": -32602, 00:09:06.842 "message": "Invalid MN SPDK_Controller\u001f" 00:09:06.842 }' 00:09:06.842 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:06.842 { 00:09:06.842 "nqn": "nqn.2016-06.io.spdk:cnode12079", 00:09:06.842 "model_number": "SPDK_Controller\u001f", 00:09:06.842 "method": "nvmf_create_subsystem", 00:09:06.842 "req_id": 1 00:09:06.842 } 00:09:06.842 Got JSON-RPC error response 00:09:06.842 response: 00:09:06.842 { 00:09:06.842 "code": -32602, 00:09:06.842 "message": "Invalid MN SPDK_Controller\u001f" 00:09:06.842 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' 
'86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.103 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 
14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]] 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'hB$4mF+v$CI1}*'\''+W$ebQ' 00:09:07.104 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'hB$4mF+v$CI1}*'\''+W$ebQ' nqn.2016-06.io.spdk:cnode24418 00:09:07.366 [2024-07-25 14:36:27.426790] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24418: invalid serial number 'hB$4mF+v$CI1}*'+W$ebQ' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:07.366 { 00:09:07.366 "nqn": "nqn.2016-06.io.spdk:cnode24418", 00:09:07.366 "serial_number": "hB$4mF+v$CI1}*'\''+W$ebQ", 00:09:07.366 "method": "nvmf_create_subsystem", 00:09:07.366 "req_id": 1 00:09:07.366 } 00:09:07.366 Got JSON-RPC error response 00:09:07.366 response: 
00:09:07.366 { 00:09:07.366 "code": -32602, 00:09:07.366 "message": "Invalid SN hB$4mF+v$CI1}*'\''+W$ebQ" 00:09:07.366 }' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:07.366 { 00:09:07.366 "nqn": "nqn.2016-06.io.spdk:cnode24418", 00:09:07.366 "serial_number": "hB$4mF+v$CI1}*'+W$ebQ", 00:09:07.366 "method": "nvmf_create_subsystem", 00:09:07.366 "req_id": 1 00:09:07.366 } 00:09:07.366 Got JSON-RPC error response 00:09:07.366 response: 00:09:07.366 { 00:09:07.366 "code": -32602, 00:09:07.366 "message": "Invalid SN hB$4mF+v$CI1}*'+W$ebQ" 00:09:07.366 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 72 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:07.366 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x29' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=5 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.367 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]] 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 't(X4H\tvCW'\'')b*)LITi5"N(4|Re$[De{q0.Qb#H#' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 't(X4H\tvCW'\'')b*)LITi5"N(4|Re$[De{q0.Qb#H#' nqn.2016-06.io.spdk:cnode28973 00:09:07.628 [2024-07-25 14:36:27.884334] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28973: invalid model number 't(X4H\tvCW')b*)LITi5"N(4|Re$[De{q0.Qb#H#' 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:07.628 { 00:09:07.628 "nqn": "nqn.2016-06.io.spdk:cnode28973", 00:09:07.628 "model_number": "t(X4H\\t\u007fvCW'\'')b*)LITi5\"N(4|Re$[De{q0.Qb#H#", 00:09:07.628 "method": "nvmf_create_subsystem", 00:09:07.628 "req_id": 1 00:09:07.628 } 00:09:07.628 Got JSON-RPC error response 00:09:07.628 response: 00:09:07.628 { 00:09:07.628 "code": -32602, 00:09:07.628 "message": "Invalid MN t(X4H\\t\u007fvCW'\'')b*)LITi5\"N(4|Re$[De{q0.Qb#H#" 00:09:07.628 }' 00:09:07.628 14:36:27 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:07.628 { 00:09:07.628 "nqn": "nqn.2016-06.io.spdk:cnode28973", 00:09:07.628 "model_number": "t(X4H\\t\u007fvCW')b*)LITi5\"N(4|Re$[De{q0.Qb#H#", 00:09:07.628 "method": "nvmf_create_subsystem", 00:09:07.628 "req_id": 1 00:09:07.628 } 00:09:07.628 Got JSON-RPC error response 00:09:07.628 response: 00:09:07.628 { 00:09:07.628 "code": -32602, 00:09:07.628 "message": "Invalid MN t(X4H\\t\u007fvCW')b*)LITi5\"N(4|Re$[De{q0.Qb#H#" 00:09:07.628 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:07.628 14:36:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:07.888 [2024-07-25 14:36:28.069019] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.888 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:08.146 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:08.146 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:08.146 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:08.146 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:08.147 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:08.406 [2024-07-25 14:36:28.451612] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:08.406 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:08.406 { 00:09:08.406 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:08.406 "listen_address": { 00:09:08.406 "trtype": "tcp", 00:09:08.406 "traddr": "", 00:09:08.406 "trsvcid": "4421" 00:09:08.406 }, 00:09:08.406 "method": "nvmf_subsystem_remove_listener", 00:09:08.406 "req_id": 1 00:09:08.406 } 00:09:08.406 Got JSON-RPC error response 00:09:08.406 response: 00:09:08.406 { 00:09:08.406 "code": -32602, 00:09:08.406 "message": "Invalid parameters" 00:09:08.406 }' 00:09:08.406 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:08.406 { 00:09:08.406 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:08.406 "listen_address": { 00:09:08.406 "trtype": "tcp", 00:09:08.406 "traddr": "", 00:09:08.406 "trsvcid": "4421" 00:09:08.406 }, 00:09:08.406 "method": "nvmf_subsystem_remove_listener", 00:09:08.406 "req_id": 1 00:09:08.406 } 00:09:08.406 Got JSON-RPC error response 00:09:08.406 response: 00:09:08.406 { 00:09:08.406 "code": -32602, 00:09:08.406 "message": "Invalid parameters" 00:09:08.406 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:08.406 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16236 -i 0 00:09:08.406 [2024-07-25 14:36:28.640196] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16236: invalid cntlid range [0-65519] 00:09:08.406 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:08.406 { 00:09:08.406 "nqn": "nqn.2016-06.io.spdk:cnode16236", 00:09:08.406 "min_cntlid": 0, 00:09:08.406 "method": "nvmf_create_subsystem", 00:09:08.406 "req_id": 1 00:09:08.406 } 00:09:08.406 Got JSON-RPC error response 00:09:08.406 
response: 00:09:08.406 { 00:09:08.406 "code": -32602, 00:09:08.406 "message": "Invalid cntlid range [0-65519]" 00:09:08.406 }' 00:09:08.406 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:08.406 { 00:09:08.406 "nqn": "nqn.2016-06.io.spdk:cnode16236", 00:09:08.406 "min_cntlid": 0, 00:09:08.406 "method": "nvmf_create_subsystem", 00:09:08.406 "req_id": 1 00:09:08.406 } 00:09:08.406 Got JSON-RPC error response 00:09:08.406 response: 00:09:08.406 { 00:09:08.406 "code": -32602, 00:09:08.406 "message": "Invalid cntlid range [0-65519]" 00:09:08.406 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:08.406 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11851 -i 65520 00:09:08.667 [2024-07-25 14:36:28.824790] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11851: invalid cntlid range [65520-65519] 00:09:08.667 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:08.667 { 00:09:08.667 "nqn": "nqn.2016-06.io.spdk:cnode11851", 00:09:08.667 "min_cntlid": 65520, 00:09:08.667 "method": "nvmf_create_subsystem", 00:09:08.667 "req_id": 1 00:09:08.667 } 00:09:08.667 Got JSON-RPC error response 00:09:08.667 response: 00:09:08.667 { 00:09:08.667 "code": -32602, 00:09:08.667 "message": "Invalid cntlid range [65520-65519]" 00:09:08.667 }' 00:09:08.667 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:08.667 { 00:09:08.667 "nqn": "nqn.2016-06.io.spdk:cnode11851", 00:09:08.667 "min_cntlid": 65520, 00:09:08.667 "method": "nvmf_create_subsystem", 00:09:08.667 "req_id": 1 00:09:08.667 } 00:09:08.667 Got JSON-RPC error response 00:09:08.667 response: 00:09:08.667 { 00:09:08.667 "code": -32602, 00:09:08.667 "message": "Invalid cntlid range [65520-65519]" 00:09:08.667 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:08.667 14:36:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22833 -I 0 00:09:08.927 [2024-07-25 14:36:29.017482] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22833: invalid cntlid range [1-0] 00:09:08.927 14:36:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:08.927 { 00:09:08.927 "nqn": "nqn.2016-06.io.spdk:cnode22833", 00:09:08.927 "max_cntlid": 0, 00:09:08.927 "method": "nvmf_create_subsystem", 00:09:08.927 "req_id": 1 00:09:08.927 } 00:09:08.927 Got JSON-RPC error response 00:09:08.927 response: 00:09:08.927 { 00:09:08.927 "code": -32602, 00:09:08.927 "message": "Invalid cntlid range [1-0]" 00:09:08.927 }' 00:09:08.927 14:36:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:08.927 { 00:09:08.927 "nqn": "nqn.2016-06.io.spdk:cnode22833", 00:09:08.927 "max_cntlid": 0, 00:09:08.927 "method": "nvmf_create_subsystem", 00:09:08.927 "req_id": 1 00:09:08.927 } 00:09:08.927 Got JSON-RPC error response 00:09:08.927 response: 00:09:08.927 { 00:09:08.927 "code": -32602, 00:09:08.927 "message": "Invalid cntlid range [1-0]" 00:09:08.927 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:08.927 14:36:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6064 -I 65520 00:09:08.927 [2024-07-25 14:36:29.214111] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6064: invalid cntlid range [1-65520] 00:09:09.187 14:36:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:09.187 { 00:09:09.187 "nqn": "nqn.2016-06.io.spdk:cnode6064", 00:09:09.187 "max_cntlid": 65520, 00:09:09.187 "method": "nvmf_create_subsystem", 00:09:09.187 "req_id": 1 00:09:09.187 } 00:09:09.187 Got JSON-RPC error response 00:09:09.187 response: 00:09:09.187 { 00:09:09.187 "code": -32602, 00:09:09.187 "message": "Invalid cntlid range [1-65520]" 00:09:09.187 }' 00:09:09.187 14:36:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:09.187 { 00:09:09.187 "nqn": "nqn.2016-06.io.spdk:cnode6064", 00:09:09.187 "max_cntlid": 65520, 00:09:09.187 "method": "nvmf_create_subsystem", 00:09:09.187 "req_id": 1 00:09:09.187 } 00:09:09.187 Got JSON-RPC error response 00:09:09.187 response: 00:09:09.187 { 00:09:09.187 "code": -32602, 00:09:09.187 "message": "Invalid cntlid range [1-65520]" 00:09:09.187 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:09.187 14:36:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21153 -i 6 -I 5 00:09:09.187 [2024-07-25 14:36:29.402756] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21153: invalid cntlid range [6-5] 00:09:09.187 14:36:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:09.187 { 00:09:09.187 "nqn": "nqn.2016-06.io.spdk:cnode21153", 00:09:09.187 "min_cntlid": 6, 00:09:09.187 "max_cntlid": 5, 00:09:09.187 "method": "nvmf_create_subsystem", 00:09:09.187 "req_id": 1 00:09:09.188 } 00:09:09.188 Got JSON-RPC error response 00:09:09.188 response: 00:09:09.188 { 00:09:09.188 "code": -32602, 00:09:09.188 "message": "Invalid cntlid range [6-5]" 00:09:09.188 }' 00:09:09.188 14:36:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:09.188 { 00:09:09.188 "nqn": "nqn.2016-06.io.spdk:cnode21153", 00:09:09.188 "min_cntlid": 6, 00:09:09.188 "max_cntlid": 5, 00:09:09.188 "method": "nvmf_create_subsystem", 00:09:09.188 "req_id": 1 00:09:09.188 } 00:09:09.188 Got JSON-RPC error response 00:09:09.188 response: 00:09:09.188 { 00:09:09.188 "code": -32602, 00:09:09.188 "message": "Invalid cntlid range [6-5]" 00:09:09.188 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:09.188 14:36:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:09.447 14:36:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:09.447 { 00:09:09.447 "name": "foobar", 00:09:09.447 "method": "nvmf_delete_target", 00:09:09.447 "req_id": 1 00:09:09.447 } 00:09:09.447 Got JSON-RPC error response 00:09:09.447 response: 00:09:09.447 { 00:09:09.447 "code": -32602, 00:09:09.447 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:09.447 }' 00:09:09.447 14:36:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:09.447 { 00:09:09.447 "name": "foobar", 00:09:09.447 "method": "nvmf_delete_target", 00:09:09.447 "req_id": 1 00:09:09.447 } 00:09:09.447 Got JSON-RPC error response 00:09:09.447 response: 00:09:09.447 { 00:09:09.447 "code": -32602, 00:09:09.447 "message": "The specified target doesn't exist, cannot delete it." 
00:09:09.447 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:09.447 14:36:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:09.447 14:36:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:09.447 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:09.447 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:09.447 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:09.447 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:09.447 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:09.447 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:09.447 rmmod nvme_tcp 00:09:09.447 rmmod nvme_fabrics 00:09:09.447 rmmod nvme_keyring 00:09:09.447 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:09.447 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:09.448 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:09.448 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2208709 ']' 00:09:09.448 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2208709 00:09:09.448 14:36:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2208709 ']' 00:09:09.448 14:36:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2208709 00:09:09.448 14:36:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:09.448 14:36:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:09.448 14:36:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2208709 00:09:09.448 14:36:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:09.448 14:36:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:09.448 14:36:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2208709' 00:09:09.448 killing process with pid 2208709 00:09:09.448 14:36:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2208709 00:09:09.448 14:36:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2208709 00:09:09.708 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:09.708 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:09.708 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:09.708 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:09.708 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:09.708 14:36:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.708 14:36:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.708 14:36:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.675 14:36:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:11.675 00:09:11.675 real 0m11.911s 00:09:11.675 user 0m19.659s 00:09:11.675 sys 0m5.188s 00:09:11.675 14:36:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.675 14:36:31 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:11.675 ************************************ 00:09:11.675 END TEST nvmf_invalid 00:09:11.675 ************************************ 00:09:11.675 14:36:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:11.675 14:36:31 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:11.675 14:36:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:11.675 14:36:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.675 14:36:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:11.675 ************************************ 00:09:11.675 START TEST nvmf_abort 00:09:11.675 ************************************ 00:09:11.675 14:36:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:11.937 * Looking for test storage... 00:09:11.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:11.937 14:36:32 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:11.937 14:36:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.221 
14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:17.221 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:17.221 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:17.221 Found net devices under 0000:86:00.0: cvl_0_0 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.221 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:17.222 Found net devices under 0000:86:00.1: cvl_0_1 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.222 14:36:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:17.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:17.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:09:17.222 00:09:17.222 --- 10.0.0.2 ping statistics --- 00:09:17.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.222 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.457 ms 00:09:17.222 00:09:17.222 --- 10.0.0.1 ping statistics --- 00:09:17.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.222 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2212872 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2212872 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2212872 ']' 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:17.222 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.222 [2024-07-25 14:36:37.097372] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
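In short, the nvmf_tcp_init sequence traced above boils down to the following steps (a sketch assembled only from the commands echoed in this run; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are specific to this host and test pass):
  # move the target-side E810 port into its own network namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (default netns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP/4420 for NVMe-oF, then verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1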
00:09:17.222 [2024-07-25 14:36:37.097420] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.222 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.222 [2024-07-25 14:36:37.156866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:17.222 [2024-07-25 14:36:37.236754] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.222 [2024-07-25 14:36:37.236789] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.222 [2024-07-25 14:36:37.236796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.222 [2024-07-25 14:36:37.236802] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.222 [2024-07-25 14:36:37.236807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.222 [2024-07-25 14:36:37.236852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.222 [2024-07-25 14:36:37.236870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.222 [2024-07-25 14:36:37.236872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.792 [2024-07-25 14:36:37.941638] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.792 Malloc0 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.792 Delay0 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.792 14:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.792 14:36:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.792 14:36:38 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:17.792 14:36:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.792 14:36:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.792 14:36:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.792 14:36:38 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:17.792 14:36:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.792 14:36:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.792 [2024-07-25 14:36:38.014783] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.792 14:36:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.792 14:36:38 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:17.792 14:36:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.792 14:36:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.792 14:36:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.792 14:36:38 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:17.792 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.052 [2024-07-25 14:36:38.127919] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:20.591 Initializing NVMe Controllers 00:09:20.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:20.591 controller IO queue size 128 less than required 00:09:20.591 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:20.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:20.591 Initialization complete. Launching workers. 
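Before the abort statistics are reported, the target-side configuration that abort.sh drove in the trace above reduces to a short RPC sequence (a sketch from the echoed commands only; rpc_cmd is the test harness's wrapper for the scripts/rpc.py calls seen verbatim later in this log, paths are abbreviated to the spdk tree, and the Malloc0 sizing follows MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=4096 from abort.sh):
  # nvmf_tgt runs inside the namespace created earlier
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # run the abort example for 1 second at queue depth 128 against the artificially delayed namespace
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128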
00:09:20.591 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 41980 00:09:20.591 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42042, failed to submit 62 00:09:20.591 success 41984, unsuccess 58, failed 0 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:20.591 rmmod nvme_tcp 00:09:20.591 rmmod nvme_fabrics 00:09:20.591 rmmod nvme_keyring 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2212872 ']' 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2212872 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2212872 ']' 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2212872 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2212872 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2212872' 00:09:20.591 killing process with pid 2212872 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2212872 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2212872 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.591 14:36:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.500 14:36:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.500 00:09:22.500 real 0m10.755s 00:09:22.500 user 0m13.133s 00:09:22.500 sys 0m4.846s 00:09:22.500 14:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.500 14:36:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:22.500 ************************************ 00:09:22.500 END TEST nvmf_abort 00:09:22.500 ************************************ 00:09:22.500 14:36:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:22.500 14:36:42 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:22.500 14:36:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:22.500 14:36:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.500 14:36:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.500 ************************************ 00:09:22.500 START TEST nvmf_ns_hotplug_stress 00:09:22.500 ************************************ 00:09:22.500 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:22.759 * Looking for test storage... 00:09:22.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.759 14:36:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.759 14:36:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.759 14:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.042 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:28.042 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:28.043 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:28.043 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.043 14:36:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:28.043 Found net devices under 0000:86:00.0: cvl_0_0 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:28.043 Found net devices under 0000:86:00.1: cvl_0_1 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:28.043 14:36:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:28.043 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:28.303 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:28.303 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:28.303 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:28.303 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:28.303 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:28.303 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:28.303 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:28.303 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:28.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:28.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:09:28.564 00:09:28.564 --- 10.0.0.2 ping statistics --- 00:09:28.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.564 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:28.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:28.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:09:28.564 00:09:28.564 --- 10.0.0.1 ping statistics --- 00:09:28.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.564 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2217025 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2217025 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2217025 ']' 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.564 14:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.564 [2024-07-25 14:36:48.679794] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:09:28.564 [2024-07-25 14:36:48.679837] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.564 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.564 [2024-07-25 14:36:48.739402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:28.564 [2024-07-25 14:36:48.814971] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.564 [2024-07-25 14:36:48.815013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.564 [2024-07-25 14:36:48.815023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.564 [2024-07-25 14:36:48.815029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.564 [2024-07-25 14:36:48.815034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.564 [2024-07-25 14:36:48.815154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.564 [2024-07-25 14:36:48.815259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.564 [2024-07-25 14:36:48.815260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.505 14:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.505 14:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:29.505 14:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:29.505 14:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:29.505 14:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.505 14:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.505 14:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:29.506 14:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:29.506 [2024-07-25 14:36:49.692031] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.506 14:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:29.766 14:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.026 [2024-07-25 14:36:50.085594] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.026 14:36:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.027 14:36:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:30.287 Malloc0 00:09:30.287 14:36:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:30.546 Delay0 00:09:30.546 14:36:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.546 14:36:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:30.806 NULL1 00:09:30.806 14:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:31.065 14:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2217382 00:09:31.065 14:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:31.065 14:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:31.065 14:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.065 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.444 Read completed with error (sct=0, sc=11) 00:09:32.444 14:36:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.444 14:36:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:32.444 14:36:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:32.444 true 00:09:32.702 14:36:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:32.702 14:36:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.270 14:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.529 14:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:33.529 14:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:33.788 true 00:09:33.788 
14:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:33.788 14:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.047 14:36:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.047 14:36:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:34.047 14:36:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:34.305 true 00:09:34.305 14:36:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:34.305 14:36:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.712 14:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.712 14:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:35.712 14:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:35.712 true 00:09:35.972 14:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:35.972 14:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.540 14:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.799 14:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:36.799 14:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:37.058 true 00:09:37.058 14:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:37.058 14:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.318 14:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.318 14:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:37.318 14:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:37.576 true 00:09:37.577 14:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:37.577 14:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.954 14:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.954 14:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:38.954 14:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:39.213 true 00:09:39.213 14:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:39.213 14:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.150 14:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.150 14:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:40.150 14:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:40.410 true 00:09:40.410 14:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:40.410 14:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.410 14:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.670 14:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:40.670 14:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:40.929 true 00:09:40.929 14:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:40.929 14:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.310 14:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.310 14:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:42.310 14:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:42.569 true 00:09:42.569 14:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:42.569 14:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.507 14:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.507 14:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:43.507 14:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:43.766 true 00:09:43.766 14:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:43.766 14:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.766 14:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.026 14:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:44.026 14:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:44.286 true 00:09:44.286 14:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:44.286 14:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.665 14:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
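The repetition above is the hotplug stress loop itself; a sketch of one pass, in the order echoed in this run (paths abbreviated; 2217382 is the spdk_nvme_perf PID started with -w randread -q 128 -t 30, and the NULL1 size argument grows by one on every pass):
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # hot-add the delayed namespace under I/O
  scripts/rpc.py bdev_null_resize NULL1 1001                                 # 1001, 1002, ... on later passes
  kill -0 2217382                                                            # confirm spdk_nvme_perf (PERF_PID) is still alive
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-remove namespace 1 again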
00:09:45.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.665 14:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:45.665 14:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:45.665 true 00:09:45.665 14:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:45.665 14:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.602 14:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.861 14:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:46.861 14:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:46.861 true 00:09:46.861 14:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:46.861 14:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.121 14:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.381 14:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:47.381 14:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:47.381 true 00:09:47.640 14:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:47.640 14:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.578 14:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.837 14:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:48.837 14:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:49.096 true 00:09:49.096 14:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:49.096 14:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.079 14:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.079 14:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:50.079 14:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:50.339 true 00:09:50.339 14:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:50.339 14:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.339 14:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.598 14:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:50.598 14:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:50.857 true 00:09:50.857 14:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:50.857 14:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.794 14:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.054 14:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:52.054 14:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:52.312 true 00:09:52.313 14:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:52.313 14:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.262 14:37:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.262 14:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:53.262 14:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:53.525 true 00:09:53.525 14:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:53.525 14:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.784 14:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.784 14:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:53.784 14:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:54.043 true 00:09:54.043 14:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:54.043 14:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.422 14:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.422 14:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:55.422 14:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:55.422 true 00:09:55.422 14:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:55.422 14:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.357 14:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.616 14:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:56.616 14:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:56.616 true 00:09:56.875 14:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 2217382 00:09:56.875 14:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.875 14:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.134 14:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:57.134 14:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:57.393 true 00:09:57.393 14:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:57.393 14:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.331 14:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.590 14:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:58.590 14:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:58.849 true 00:09:58.849 14:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:09:58.849 14:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.786 14:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.786 14:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:59.786 14:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:00.046 true 00:10:00.046 14:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382 00:10:00.046 14:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.046 14:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.305 14:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:10:00.305 14:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:10:00.564 true
00:10:00.564 14:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382
00:10:00.564 14:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:01.943 Initializing NVMe Controllers
00:10:01.943 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:01.943 Controller IO queue size 128, less than required.
00:10:01.943 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:01.943 Controller IO queue size 128, less than required.
00:10:01.943 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:01.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:01.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:01.943 Initialization complete. Launching workers.
00:10:01.943 ========================================================
00:10:01.943 Latency(us)
00:10:01.943 Device Information : IOPS MiB/s Average min max
00:10:01.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1801.27 0.88 50543.79 2227.00 1130561.26
00:10:01.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17491.27 8.54 7317.80 2283.90 380174.65
00:10:01.943 ========================================================
00:10:01.943 Total : 19292.53 9.42 11353.64 2227.00 1130561.26
00:10:01.943
00:10:01.943 14:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:01.943 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:10:01.943 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:10:01.943 true
00:10:01.943 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2217382
00:10:01.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2217382) - No such process
00:10:01.943 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2217382
00:10:01.943 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:02.203 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:02.463 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:02.463 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:02.463 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:02.463 14:37:22
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:02.463 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:02.722 null0 00:10:02.722 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:02.722 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:02.722 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:02.722 null1 00:10:02.722 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:02.722 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:02.722 14:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:02.982 null2 00:10:02.982 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:02.982 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:02.982 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:03.241 null3 00:10:03.241 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:03.241 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:03.241 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:03.241 null4 00:10:03.241 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:03.241 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:03.241 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:03.499 null5 00:10:03.499 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:03.499 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:03.499 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:03.759 null6 00:10:03.759 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:03.759 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:03.759 14:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:03.759 null7 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.019 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2222952 2222953 2222956 2222957 2222959 2222961 2222963 2222965 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.020 14:37:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.020 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.281 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:04.541 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.541 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.541 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.541 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.541 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:04.541 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:04.541 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.541 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.541 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.541 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.541 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.802 14:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:04.802 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.802 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.802 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.802 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.802 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:04.802 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:04.802 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.802 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:05.062 14:37:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.062 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:05.323 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:05.323 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:05.323 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:05.323 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.323 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:05.323 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.323 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:05.323 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:05.323 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.323 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.323 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:05.324 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.324 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.324 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:05.324 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.324 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.324 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:05.589 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.589 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.589 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:05.589 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.589 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.589 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:05.590 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:05.850 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.850 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.850 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:05.850 14:37:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.850 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.850 14:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:05.850 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.110 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.111 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:06.371 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:06.371 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:06.371 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:06.371 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:06.371 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.371 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:06.371 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:06.371 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:06.632 14:37:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:06.632 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:06.892 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:06.892 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:06.893 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:06.893 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.893 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:06.893 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:06.893 14:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.893 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:07.154 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:07.154 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:07.154 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:07.154 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.154 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:07.154 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:07.154 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:07.154 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:07.414 14:37:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.414 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:07.675 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:07.675 rmmod nvme_tcp 00:10:07.675 rmmod nvme_fabrics 00:10:07.676 rmmod nvme_keyring 00:10:07.936 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:07.936 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:07.936 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:07.936 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2217025 ']' 00:10:07.936 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2217025 00:10:07.936 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2217025 ']' 00:10:07.936 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2217025 00:10:07.936 14:37:27 
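[editor's note] The add/remove churn traced above is easier to follow as a reconstruction. Every counter bump is attributed to ns_hotplug_stress.sh line 16, every nvmf_subsystem_add_ns to line 17 and every nvmf_subsystem_remove_ns to line 18; namespaces 1-8 are always backed by null0-null7, the loop bound is 10, and the interleaved ordering of the entries suggests the eight namespaces are exercised by concurrent workers. A minimal sketch under those assumptions (not the literal script):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    add_remove() {                      # assumed: one worker per namespace id
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do                                  # line 16 in the trace
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"   # line 17: hot-add
            "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"           # line 18: hot-remove
        done
    }

    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &   # null0..null7 plugged in as nsid 1..8
    done
    wait
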
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:07.936 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:07.936 14:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2217025 00:10:07.936 14:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:07.936 14:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:07.936 14:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2217025' 00:10:07.936 killing process with pid 2217025 00:10:07.936 14:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2217025 00:10:07.936 14:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2217025 00:10:07.936 14:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:07.936 14:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:07.936 14:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:07.936 14:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:07.936 14:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:07.936 14:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.936 14:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.936 14:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.482 14:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:10.482 00:10:10.482 real 0m47.527s 00:10:10.482 user 3m10.408s 00:10:10.482 sys 0m15.397s 00:10:10.482 14:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.482 14:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.482 ************************************ 00:10:10.482 END TEST nvmf_ns_hotplug_stress 00:10:10.482 ************************************ 00:10:10.482 14:37:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:10.482 14:37:30 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:10.482 14:37:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:10.482 14:37:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.482 14:37:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:10.482 ************************************ 00:10:10.482 START TEST nvmf_connect_stress 00:10:10.482 ************************************ 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:10.482 * Looking for test storage... 
00:10:10.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.482 14:37:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:10.483 14:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:15.766 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:15.766 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:15.766 Found net devices under 0000:86:00.0: cvl_0_0 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:15.766 14:37:35 
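[editor's note] The per-port scan above (repeated just below for 0000:86:00.1) boils down to matching the two Intel E810 functions by PCI ID and mapping each PCI address to the kernel net device exported under sysfs (cvl_0_0 and cvl_0_1 on this rig). A condensed, standalone sketch of that idea, not the literal nvmf/common.sh code:

    intel=0x8086 e810=0x159b            # the IDs reported as "Found 0000:86:00.x (0x8086 - 0x159b)"
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        [[ $(< "$pci/vendor") == "$intel" && $(< "$pci/device") == "$e810" ]] || continue
        pci_net_devs=("$pci/net/"*)                  # e.g. .../0000:86:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")      # keep just the interface names
        echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
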
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:15.766 Found net devices under 0000:86:00.1: cvl_0_1 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:15.766 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:15.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:15.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:10:15.767 00:10:15.767 --- 10.0.0.2 ping statistics --- 00:10:15.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.767 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:10:15.767 00:10:15.767 --- 10.0.0.1 ping statistics --- 00:10:15.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.767 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2227321 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2227321 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2227321 ']' 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:15.767 14:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.767 [2024-07-25 14:37:35.888262] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
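[editor's note] For reference, the topology being exercised here is built entirely from the commands traced above: the target-side port (cvl_0_0) is moved into a private network namespace while the initiator keeps cvl_0_1 in the root namespace, presumably so NVMe/TCP traffic between initiator and target goes through the two physical E810 ports rather than loopback. The same commands, minus the xtrace prefixes:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP back in
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
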
00:10:15.767 [2024-07-25 14:37:35.888304] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.767 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.767 [2024-07-25 14:37:35.946680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.767 [2024-07-25 14:37:36.023606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.767 [2024-07-25 14:37:36.023640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.767 [2024-07-25 14:37:36.023647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.767 [2024-07-25 14:37:36.023653] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.767 [2024-07-25 14:37:36.023658] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.767 [2024-07-25 14:37:36.023770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.767 [2024-07-25 14:37:36.023865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.767 [2024-07-25 14:37:36.023866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.706 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.706 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:16.706 14:37:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:16.706 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:16.706 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.706 14:37:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.706 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:16.706 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.706 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.706 [2024-07-25 14:37:36.728529] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.706 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.707 [2024-07-25 14:37:36.765340] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.707 NULL1 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2227353 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 
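[editor's note] Stripped of the xtrace noise, the target-side preparation for this test is short: start nvmf_tgt inside the target namespace, create the TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420 and a null bdev, then launch the connect_stress workload against it. Reassembled from the commands above, with the long /var/jenkins/... prefix abbreviated to $SPDK for readability; waitforlisten and rpc_cmd are the harness helpers seen in the trace:

    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                   # waits for the RPC socket /var/tmp/spdk.sock

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512    # size 1000 (MiB), 512-byte blocks

    "$SPDK/test/nvme/connect_stress/connect_stress" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!
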
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.707 14:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.966 14:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.966 14:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:16.966 14:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:16.966 14:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.966 14:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:17.227 14:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.227 14:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:17.227 14:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:17.227 14:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.227 14:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:17.796 14:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.796 14:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 2227353 00:10:17.796 14:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:17.796 14:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.796 14:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.057 14:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.057 14:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:18.057 14:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:18.057 14:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.057 14:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.317 14:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.317 14:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:18.317 14:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:18.317 14:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.317 14:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.576 14:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.576 14:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:18.576 14:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:18.576 14:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.576 14:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.147 14:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.147 14:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:19.147 14:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.147 14:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.147 14:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.408 14:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.408 14:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:19.408 14:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.408 14:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.408 14:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.667 14:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.668 14:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:19.668 14:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.668 14:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.668 14:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.927 14:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.927 14:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:19.928 14:37:40 
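[editor's note] From here on the log is the supervision pattern repeating: connect_stress.sh line 34 checks that the workload process (PERF_PID 2227353) is still alive and line 35 replays the batched RPCs from rpc.txt (assembled by the "seq 1 20" / cat loop above) against the target, over and over until the run ends, presumably the -t 10 second run time. A guess at the shape of that loop, not the literal script (xtrace does not show redirections, so the input to rpc_cmd is assumed):

    while kill -0 "$PERF_PID" 2> /dev/null; do   # line 34: workload still running?
        rpc_cmd < "$rpcs"                        # line 35: replay the batched RPCs
    done
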
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.928 14:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.928 14:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.187 14:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.187 14:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:20.187 14:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.187 14:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.187 14:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.757 14:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.757 14:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:20.757 14:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.757 14:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.757 14:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.017 14:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.017 14:37:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:21.017 14:37:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.017 14:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.017 14:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.277 14:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.277 14:37:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:21.277 14:37:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.277 14:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.277 14:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.536 14:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.536 14:37:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:21.536 14:37:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.536 14:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.536 14:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.796 14:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.796 14:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:21.796 14:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.796 14:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.796 14:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.366 14:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.366 14:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:22.366 14:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:10:22.366 14:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.366 14:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.626 14:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.626 14:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:22.626 14:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.626 14:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.626 14:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.886 14:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.886 14:37:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:22.886 14:37:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.886 14:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.886 14:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.146 14:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.146 14:37:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:23.146 14:37:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.146 14:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.146 14:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.475 14:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.475 14:37:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:23.475 14:37:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.475 14:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.475 14:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.735 14:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.735 14:37:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:23.735 14:37:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.735 14:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.735 14:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.305 14:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.305 14:37:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:24.305 14:37:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.305 14:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.305 14:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.565 14:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.565 14:37:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:24.565 14:37:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.565 14:37:44 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.565 14:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.825 14:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.825 14:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:24.825 14:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.825 14:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.825 14:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.086 14:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.086 14:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:25.086 14:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.086 14:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.086 14:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.655 14:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.655 14:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:25.655 14:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.655 14:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.655 14:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.915 14:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.915 14:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:25.915 14:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.915 14:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.915 14:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.175 14:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.175 14:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:26.175 14:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.175 14:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.175 14:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.435 14:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.435 14:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:26.435 14:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.435 14:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.435 14:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.695 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:26.695 14:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.695 14:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2227353 00:10:26.695 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2227353) - No such process 00:10:26.695 14:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2227353 00:10:26.695 14:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:26.695 14:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:26.695 14:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:26.695 14:37:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:26.695 14:37:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:26.695 14:37:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:26.695 14:37:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:26.695 14:37:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:26.695 14:37:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:26.695 rmmod nvme_tcp 00:10:26.695 rmmod nvme_fabrics 00:10:26.955 rmmod nvme_keyring 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2227321 ']' 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2227321 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2227321 ']' 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2227321 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2227321 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2227321' 00:10:26.955 killing process with pid 2227321 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2227321 00:10:26.955 14:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2227321 00:10:27.215 14:37:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:27.215 14:37:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:27.215 14:37:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:27.215 14:37:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.215 14:37:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:27.215 14:37:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.215 14:37:47 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.215 14:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.125 14:37:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:29.125 00:10:29.125 real 0m18.973s 00:10:29.125 user 0m41.002s 00:10:29.125 sys 0m8.029s 00:10:29.125 14:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:29.125 14:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.125 ************************************ 00:10:29.125 END TEST nvmf_connect_stress 00:10:29.125 ************************************ 00:10:29.125 14:37:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:29.125 14:37:49 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:29.125 14:37:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:29.125 14:37:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.125 14:37:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:29.125 ************************************ 00:10:29.125 START TEST nvmf_fused_ordering 00:10:29.125 ************************************ 00:10:29.125 14:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:29.385 * Looking for test storage... 00:10:29.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.385 14:37:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.386 14:37:49 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:29.386 14:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:34.666 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:34.666 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:34.666 Found net devices under 0000:86:00.0: cvl_0_0 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.666 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:34.667 Found net devices under 0000:86:00.1: cvl_0_1 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:34.667 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:34.927 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:34.927 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:34.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:34.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:10:34.927 00:10:34.927 --- 10.0.0.2 ping statistics --- 00:10:34.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.927 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:10:34.927 14:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:34.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:34.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:10:34.927 00:10:34.927 --- 10.0.0.1 ping statistics --- 00:10:34.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.927 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2232603 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2232603 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2232603 ']' 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:34.927 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.927 [2024-07-25 14:37:55.074981] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:10:34.927 [2024-07-25 14:37:55.075023] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.927 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.927 [2024-07-25 14:37:55.133752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.927 [2024-07-25 14:37:55.216628] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.927 [2024-07-25 14:37:55.216661] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.927 [2024-07-25 14:37:55.216668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.927 [2024-07-25 14:37:55.216674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.927 [2024-07-25 14:37:55.216680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
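The records above show the fused_ordering harness starting the SPDK target inside the cvl_0_0_ns_spdk namespace and then waiting for its RPC socket. As a rough, hypothetical sketch only (not the autotest helpers themselves; the polling loop and variable names are assumptions, while the binary path, namespace, flags and socket path are taken from the log), the equivalent manual bring-up would be:
# Hypothetical bring-up sketch; the wait loop is illustrative only.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Wait until the target process is up and listening on its UNIX-domain RPC socket.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done
echo "nvmf_tgt running as pid ${nvmfpid}"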
00:10:34.927 [2024-07-25 14:37:55.216697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:35.869 [2024-07-25 14:37:55.920350] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:35.869 [2024-07-25 14:37:55.940479] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:35.869 NULL1 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.869 14:37:55 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.869 14:37:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:35.869 [2024-07-25 14:37:55.990657] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:10:35.869 [2024-07-25 14:37:55.990686] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2232744 ] 00:10:35.869 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.808 Attached to nqn.2016-06.io.spdk:cnode1 00:10:36.809 Namespace ID: 1 size: 1GB 00:10:36.809 fused_ordering(0) 00:10:36.809 fused_ordering(1) 00:10:36.809 fused_ordering(2) 00:10:36.809 fused_ordering(3) 00:10:36.809 fused_ordering(4) 00:10:36.809 fused_ordering(5) 00:10:36.809 fused_ordering(6) 00:10:36.809 fused_ordering(7) 00:10:36.809 fused_ordering(8) 00:10:36.809 fused_ordering(9) 00:10:36.809 fused_ordering(10) 00:10:36.809 fused_ordering(11) 00:10:36.809 fused_ordering(12) 00:10:36.809 fused_ordering(13) 00:10:36.809 fused_ordering(14) 00:10:36.809 fused_ordering(15) 00:10:36.809 fused_ordering(16) 00:10:36.809 fused_ordering(17) 00:10:36.809 fused_ordering(18) 00:10:36.809 fused_ordering(19) 00:10:36.809 fused_ordering(20) 00:10:36.809 fused_ordering(21) 00:10:36.809 fused_ordering(22) 00:10:36.809 fused_ordering(23) 00:10:36.809 fused_ordering(24) 00:10:36.809 fused_ordering(25) 00:10:36.809 fused_ordering(26) 00:10:36.809 fused_ordering(27) 00:10:36.809 fused_ordering(28) 00:10:36.809 fused_ordering(29) 00:10:36.809 fused_ordering(30) 00:10:36.809 fused_ordering(31) 00:10:36.809 fused_ordering(32) 00:10:36.809 fused_ordering(33) 00:10:36.809 fused_ordering(34) 00:10:36.809 fused_ordering(35) 00:10:36.809 fused_ordering(36) 00:10:36.809 fused_ordering(37) 00:10:36.809 fused_ordering(38) 00:10:36.809 fused_ordering(39) 00:10:36.809 fused_ordering(40) 00:10:36.809 fused_ordering(41) 00:10:36.809 fused_ordering(42) 00:10:36.809 fused_ordering(43) 00:10:36.809 fused_ordering(44) 00:10:36.809 fused_ordering(45) 00:10:36.809 fused_ordering(46) 00:10:36.809 fused_ordering(47) 00:10:36.809 fused_ordering(48) 00:10:36.809 fused_ordering(49) 00:10:36.809 fused_ordering(50) 00:10:36.809 fused_ordering(51) 00:10:36.809 fused_ordering(52) 00:10:36.809 fused_ordering(53) 00:10:36.809 fused_ordering(54) 00:10:36.809 fused_ordering(55) 00:10:36.809 fused_ordering(56) 00:10:36.809 fused_ordering(57) 00:10:36.809 fused_ordering(58) 00:10:36.809 fused_ordering(59) 00:10:36.809 fused_ordering(60) 00:10:36.809 fused_ordering(61) 00:10:36.809 fused_ordering(62) 00:10:36.809 fused_ordering(63) 00:10:36.809 fused_ordering(64) 00:10:36.809 fused_ordering(65) 00:10:36.809 fused_ordering(66) 00:10:36.809 fused_ordering(67) 00:10:36.809 fused_ordering(68) 00:10:36.809 fused_ordering(69) 00:10:36.809 fused_ordering(70) 00:10:36.809 fused_ordering(71) 00:10:36.809 fused_ordering(72) 00:10:36.809 fused_ordering(73) 00:10:36.809 fused_ordering(74) 00:10:36.809 fused_ordering(75) 00:10:36.809 fused_ordering(76) 00:10:36.809 fused_ordering(77) 00:10:36.809 fused_ordering(78) 00:10:36.809 
fused_ordering(79) 00:10:36.809 fused_ordering(80) 00:10:36.809 fused_ordering(81) 00:10:36.809 fused_ordering(82) 00:10:36.809 fused_ordering(83) 00:10:36.809 fused_ordering(84) 00:10:36.809 fused_ordering(85) 00:10:36.809 fused_ordering(86) 00:10:36.809 fused_ordering(87) 00:10:36.809 fused_ordering(88) 00:10:36.809 fused_ordering(89) 00:10:36.809 fused_ordering(90) 00:10:36.809 fused_ordering(91) 00:10:36.809 fused_ordering(92) 00:10:36.809 fused_ordering(93) 00:10:36.809 fused_ordering(94) 00:10:36.809 fused_ordering(95) 00:10:36.809 fused_ordering(96) 00:10:36.809 fused_ordering(97) 00:10:36.809 fused_ordering(98) 00:10:36.809 fused_ordering(99) 00:10:36.809 fused_ordering(100) 00:10:36.809 fused_ordering(101) 00:10:36.809 fused_ordering(102) 00:10:36.809 fused_ordering(103) 00:10:36.809 fused_ordering(104) 00:10:36.809 fused_ordering(105) 00:10:36.809 fused_ordering(106) 00:10:36.809 fused_ordering(107) 00:10:36.809 fused_ordering(108) 00:10:36.809 fused_ordering(109) 00:10:36.809 fused_ordering(110) 00:10:36.809 fused_ordering(111) 00:10:36.809 fused_ordering(112) 00:10:36.809 fused_ordering(113) 00:10:36.809 fused_ordering(114) 00:10:36.809 fused_ordering(115) 00:10:36.809 fused_ordering(116) 00:10:36.809 fused_ordering(117) 00:10:36.809 fused_ordering(118) 00:10:36.809 fused_ordering(119) 00:10:36.809 fused_ordering(120) 00:10:36.809 fused_ordering(121) 00:10:36.809 fused_ordering(122) 00:10:36.809 fused_ordering(123) 00:10:36.809 fused_ordering(124) 00:10:36.809 fused_ordering(125) 00:10:36.809 fused_ordering(126) 00:10:36.809 fused_ordering(127) 00:10:36.809 fused_ordering(128) 00:10:36.809 fused_ordering(129) 00:10:36.809 fused_ordering(130) 00:10:36.809 fused_ordering(131) 00:10:36.809 fused_ordering(132) 00:10:36.809 fused_ordering(133) 00:10:36.809 fused_ordering(134) 00:10:36.809 fused_ordering(135) 00:10:36.809 fused_ordering(136) 00:10:36.809 fused_ordering(137) 00:10:36.809 fused_ordering(138) 00:10:36.809 fused_ordering(139) 00:10:36.809 fused_ordering(140) 00:10:36.809 fused_ordering(141) 00:10:36.809 fused_ordering(142) 00:10:36.809 fused_ordering(143) 00:10:36.809 fused_ordering(144) 00:10:36.809 fused_ordering(145) 00:10:36.809 fused_ordering(146) 00:10:36.809 fused_ordering(147) 00:10:36.809 fused_ordering(148) 00:10:36.809 fused_ordering(149) 00:10:36.809 fused_ordering(150) 00:10:36.809 fused_ordering(151) 00:10:36.809 fused_ordering(152) 00:10:36.809 fused_ordering(153) 00:10:36.809 fused_ordering(154) 00:10:36.809 fused_ordering(155) 00:10:36.809 fused_ordering(156) 00:10:36.809 fused_ordering(157) 00:10:36.809 fused_ordering(158) 00:10:36.809 fused_ordering(159) 00:10:36.809 fused_ordering(160) 00:10:36.809 fused_ordering(161) 00:10:36.809 fused_ordering(162) 00:10:36.809 fused_ordering(163) 00:10:36.809 fused_ordering(164) 00:10:36.809 fused_ordering(165) 00:10:36.809 fused_ordering(166) 00:10:36.809 fused_ordering(167) 00:10:36.809 fused_ordering(168) 00:10:36.809 fused_ordering(169) 00:10:36.809 fused_ordering(170) 00:10:36.809 fused_ordering(171) 00:10:36.809 fused_ordering(172) 00:10:36.809 fused_ordering(173) 00:10:36.809 fused_ordering(174) 00:10:36.809 fused_ordering(175) 00:10:36.809 fused_ordering(176) 00:10:36.809 fused_ordering(177) 00:10:36.809 fused_ordering(178) 00:10:36.809 fused_ordering(179) 00:10:36.809 fused_ordering(180) 00:10:36.809 fused_ordering(181) 00:10:36.809 fused_ordering(182) 00:10:36.809 fused_ordering(183) 00:10:36.809 fused_ordering(184) 00:10:36.809 fused_ordering(185) 00:10:36.809 fused_ordering(186) 00:10:36.809 
fused_ordering(187) 00:10:36.809 fused_ordering(188) 00:10:36.809 fused_ordering(189) 00:10:36.809 fused_ordering(190) 00:10:36.809 fused_ordering(191) 00:10:36.809 fused_ordering(192) 00:10:36.809 fused_ordering(193) 00:10:36.809 fused_ordering(194) 00:10:36.809 fused_ordering(195) 00:10:36.809 fused_ordering(196) 00:10:36.809 fused_ordering(197) 00:10:36.809 fused_ordering(198) 00:10:36.809 fused_ordering(199) 00:10:36.809 fused_ordering(200) 00:10:36.809 fused_ordering(201) 00:10:36.809 fused_ordering(202) 00:10:36.809 fused_ordering(203) 00:10:36.809 fused_ordering(204) 00:10:36.809 fused_ordering(205) 00:10:37.748 fused_ordering(206) 00:10:37.748 fused_ordering(207) 00:10:37.748 fused_ordering(208) 00:10:37.748 fused_ordering(209) 00:10:37.748 fused_ordering(210) 00:10:37.748 fused_ordering(211) 00:10:37.748 fused_ordering(212) 00:10:37.748 fused_ordering(213) 00:10:37.748 fused_ordering(214) 00:10:37.748 fused_ordering(215) 00:10:37.748 fused_ordering(216) 00:10:37.748 fused_ordering(217) 00:10:37.748 fused_ordering(218) 00:10:37.748 fused_ordering(219) 00:10:37.748 fused_ordering(220) 00:10:37.748 fused_ordering(221) 00:10:37.748 fused_ordering(222) 00:10:37.748 fused_ordering(223) 00:10:37.748 fused_ordering(224) 00:10:37.748 fused_ordering(225) 00:10:37.748 fused_ordering(226) 00:10:37.748 fused_ordering(227) 00:10:37.748 fused_ordering(228) 00:10:37.748 fused_ordering(229) 00:10:37.748 fused_ordering(230) 00:10:37.748 fused_ordering(231) 00:10:37.748 fused_ordering(232) 00:10:37.748 fused_ordering(233) 00:10:37.748 fused_ordering(234) 00:10:37.748 fused_ordering(235) 00:10:37.748 fused_ordering(236) 00:10:37.748 fused_ordering(237) 00:10:37.748 fused_ordering(238) 00:10:37.748 fused_ordering(239) 00:10:37.748 fused_ordering(240) 00:10:37.748 fused_ordering(241) 00:10:37.748 fused_ordering(242) 00:10:37.748 fused_ordering(243) 00:10:37.748 fused_ordering(244) 00:10:37.748 fused_ordering(245) 00:10:37.748 fused_ordering(246) 00:10:37.748 fused_ordering(247) 00:10:37.748 fused_ordering(248) 00:10:37.748 fused_ordering(249) 00:10:37.748 fused_ordering(250) 00:10:37.748 fused_ordering(251) 00:10:37.748 fused_ordering(252) 00:10:37.748 fused_ordering(253) 00:10:37.748 fused_ordering(254) 00:10:37.748 fused_ordering(255) 00:10:37.748 fused_ordering(256) 00:10:37.748 fused_ordering(257) 00:10:37.748 fused_ordering(258) 00:10:37.748 fused_ordering(259) 00:10:37.748 fused_ordering(260) 00:10:37.748 fused_ordering(261) 00:10:37.748 fused_ordering(262) 00:10:37.748 fused_ordering(263) 00:10:37.748 fused_ordering(264) 00:10:37.748 fused_ordering(265) 00:10:37.748 fused_ordering(266) 00:10:37.748 fused_ordering(267) 00:10:37.748 fused_ordering(268) 00:10:37.748 fused_ordering(269) 00:10:37.748 fused_ordering(270) 00:10:37.748 fused_ordering(271) 00:10:37.748 fused_ordering(272) 00:10:37.748 fused_ordering(273) 00:10:37.748 fused_ordering(274) 00:10:37.748 fused_ordering(275) 00:10:37.748 fused_ordering(276) 00:10:37.748 fused_ordering(277) 00:10:37.748 fused_ordering(278) 00:10:37.748 fused_ordering(279) 00:10:37.748 fused_ordering(280) 00:10:37.749 fused_ordering(281) 00:10:37.749 fused_ordering(282) 00:10:37.749 fused_ordering(283) 00:10:37.749 fused_ordering(284) 00:10:37.749 fused_ordering(285) 00:10:37.749 fused_ordering(286) 00:10:37.749 fused_ordering(287) 00:10:37.749 fused_ordering(288) 00:10:37.749 fused_ordering(289) 00:10:37.749 fused_ordering(290) 00:10:37.749 fused_ordering(291) 00:10:37.749 fused_ordering(292) 00:10:37.749 fused_ordering(293) 00:10:37.749 fused_ordering(294) 
00:10:37.749 fused_ordering(295) [fused_ordering(296) through fused_ordering(409) repeat, one entry per index, all at 00:10:37.749]
00:10:38.691 fused_ordering(410) [fused_ordering(411) through fused_ordering(614) repeat, one entry per index, at 00:10:38.691-692]
00:10:39.634 fused_ordering(615) [fused_ordering(616) through fused_ordering(819) repeat, one entry per index, at 00:10:39.634-635]
00:10:41.025 fused_ordering(820) [fused_ordering(821) through fused_ordering(1022) repeat, one entry per index, at 00:10:41.025-026]
00:10:41.026 fused_ordering(1023) 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r
nvme-tcp 00:10:41.026 rmmod nvme_tcp 00:10:41.026 rmmod nvme_fabrics 00:10:41.026 rmmod nvme_keyring 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2232603 ']' 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2232603 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2232603 ']' 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2232603 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:41.026 14:38:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:41.027 14:38:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2232603 00:10:41.027 14:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:41.027 14:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:41.027 14:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2232603' 00:10:41.027 killing process with pid 2232603 00:10:41.027 14:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2232603 00:10:41.027 14:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2232603 00:10:41.027 14:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:41.027 14:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:41.027 14:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:41.027 14:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:41.027 14:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:41.027 14:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.027 14:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:41.027 14:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.971 14:38:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:43.232 00:10:43.232 real 0m13.870s 00:10:43.232 user 0m9.337s 00:10:43.232 sys 0m7.768s 00:10:43.232 14:38:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:43.232 14:38:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:43.232 ************************************ 00:10:43.232 END TEST nvmf_fused_ordering 00:10:43.232 ************************************ 00:10:43.232 14:38:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:43.232 14:38:03 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:43.232 14:38:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:43.232 14:38:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
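The teardown traced above boils down to the following condensed sketch. It is not the literal nvmf/common.sh implementation: the function name, the "nvmfpid" variable, the retry loop body and the explicit namespace deletion are illustrative stand-ins; only the module names, the interface name and the retry bound come from this run.

  # Condensed teardown sketch of nvmftestfini/nvmfcleanup as seen in the trace.
  nvmftestfini_sketch() {
      sync
      set +e
      # Unload the initiator-side kernel modules, retrying while references drain.
      for i in {1..20}; do
          modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      done
      set -e
      # Stop the nvmf_tgt application started for this test and wait for it to exit.
      if [ -n "$nvmfpid" ]; then
          kill "$nvmfpid"
          wait "$nvmfpid"
      fi
      # Drop the target network namespace and flush the initiator-side address.
      ip netns delete cvl_0_0_ns_spdk 2>/dev/null
      ip -4 addr flush cvl_0_1
  }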
00:10:43.232 14:38:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:43.232 ************************************ 00:10:43.232 START TEST nvmf_delete_subsystem 00:10:43.232 ************************************ 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:43.232 * Looking for test storage... 00:10:43.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.232 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:43.233 14:38:03 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:43.233 14:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.520 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:48.521 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:48.521 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:48.521 
14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:48.521 Found net devices under 0000:86:00.0: cvl_0_0 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:48.521 Found net devices under 0000:86:00.1: cvl_0_1 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.521 14:38:08 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:48.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:10:48.521 00:10:48.521 --- 10.0.0.2 ping statistics --- 00:10:48.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.521 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:48.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:10:48.521 00:10:48.521 --- 10.0.0.1 ping statistics --- 00:10:48.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.521 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2236951 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2236951 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2236951 ']' 00:10:48.521 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.522 14:38:08 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.522 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.522 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.522 14:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.522 [2024-07-25 14:38:08.564435] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:10:48.522 [2024-07-25 14:38:08.564477] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.522 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.522 [2024-07-25 14:38:08.621752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:48.522 [2024-07-25 14:38:08.698228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.522 [2024-07-25 14:38:08.698266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.522 [2024-07-25 14:38:08.698273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.522 [2024-07-25 14:38:08.698279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.522 [2024-07-25 14:38:08.698285] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
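For reference, the nvmf_tcp_init and nvmfappstart steps traced above reduce to the sketch below. The interface names, addresses and nvmf_tgt flags are the ones this run uses; the polling loop at the end is only an illustrative stand-in for what waitforlisten does, and the relative binary/script paths are assumptions.

  # Move one physical port into a namespace so target and initiator share one host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # target-side address must answer from the initiator side
  # Start the target inside the namespace and wait until its RPC socket answers.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done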
00:10:48.522 [2024-07-25 14:38:08.698346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.522 [2024-07-25 14:38:08.698349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.092 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.092 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:49.093 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:49.093 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:49.093 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.093 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.093 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.093 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.093 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.353 [2024-07-25 14:38:09.390077] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.353 [2024-07-25 14:38:09.406233] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.353 NULL1 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.353 Delay0 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.353 14:38:09 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2237194 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:49.353 14:38:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:49.353 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.353 [2024-07-25 14:38:09.480904] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:51.263 14:38:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.263 14:38:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.263 14:38:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 starting I/O failed: -6 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 starting I/O failed: -6 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 starting I/O failed: -6 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 starting I/O failed: -6 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 starting I/O failed: -6 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 starting I/O failed: -6 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 starting I/O failed: -6 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 
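The Read/Write "completed with error (sct=0, sc=8)" output around this point is expected: the test queues slow I/O against a delay bdev and then deletes the subsystem while those commands are still outstanding, so spdk_nvme_perf sees the queued commands complete with an abort-type generic status instead of success, and new submissions start to fail. Condensed from the delete_subsystem.sh trace above (rpc_cmd is the test framework's wrapper around scripts/rpc.py):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # A null bdev wrapped in a delay bdev (about 1 s of added latency per I/O)
  # keeps commands queued long enough to be caught by the delete.
  rpc_cmd bdev_null_create NULL1 1000 512
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # 5 seconds of queue-depth-128 random I/O against the listener...
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  # ...then delete the subsystem while that I/O is still in flight.
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1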
00:10:51.524 starting I/O failed: -6 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 starting I/O failed: -6 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 starting I/O failed: -6 00:10:51.524 [2024-07-25 14:38:11.579737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fec2c000c00 is same with the state(5) to be set 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Write completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 Read completed with error (sct=0, sc=8) 00:10:51.524 starting I/O failed: -6 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Write completed 
with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 starting I/O failed: -6 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 starting I/O failed: -6 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 starting I/O failed: -6 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 starting I/O failed: -6 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 starting I/O failed: -6 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed 
with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 starting I/O failed: -6 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 starting I/O failed: -6 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 starting I/O failed: -6 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 starting I/O failed: -6 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 starting I/O failed: -6 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 starting I/O failed: -6 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 starting I/O failed: -6 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 Write completed with error (sct=0, sc=8) 00:10:51.525 Read completed with error (sct=0, sc=8) 00:10:51.525 [2024-07-25 14:38:11.580513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796700 is same with the state(5) to be set 00:10:52.467 [2024-07-25 14:38:12.538876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1797ac0 is same with the state(5) to be set 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 [2024-07-25 14:38:12.581295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fec2c00d310 is same with the state(5) to be set 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error 
(sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 [2024-07-25 14:38:12.581944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796a20 is same with the state(5) to be set 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 
00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Write completed with error (sct=0, sc=8) 00:10:52.467 Read completed with error (sct=0, sc=8) 00:10:52.467 [2024-07-25 14:38:12.582706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796000 is same with the state(5) to be set 00:10:52.468 Write completed with error (sct=0, sc=8) 00:10:52.468 Write completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Write completed with error (sct=0, sc=8) 00:10:52.468 Write completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Write completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Write completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Write completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Write completed with error (sct=0, sc=8) 00:10:52.468 Write completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 Read completed with error (sct=0, sc=8) 00:10:52.468 [2024-07-25 14:38:12.582844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17963e0 is same with the state(5) to be set 00:10:52.468 Initializing NVMe Controllers 00:10:52.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:52.468 Controller IO queue size 128, less than required. 00:10:52.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:52.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:52.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:52.503 Initialization complete. Launching workers. 
00:10:52.503 ======================================================== 00:10:52.503 Latency(us) 00:10:52.503 Device Information : IOPS MiB/s Average min max 00:10:52.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.02 0.09 949971.05 902.05 1013189.57 00:10:52.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.29 0.08 880735.83 271.35 1013289.49 00:10:52.503 ======================================================== 00:10:52.503 Total : 343.31 0.17 918855.22 271.35 1013289.49 00:10:52.503 00:10:52.503 [2024-07-25 14:38:12.583509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1797ac0 (9): Bad file descriptor 00:10:52.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:52.503 14:38:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.503 14:38:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:52.503 14:38:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2237194 00:10:52.503 14:38:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2237194 00:10:53.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2237194) - No such process 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2237194 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2237194 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2237194 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
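The xtrace interleaved with the perf summary above is the wait loop from delete_subsystem.sh: once the subsystem is deleted out from under the running spdk_nvme_perf job, the script polls the perf PID with kill -0 until the process disappears, bounded by a small iteration count. A minimal sketch of that polling pattern, with the PID and bound taken from this run and everything else illustrative:

# Bounded wait-for-exit poll, as traced above (sketch, not the script verbatim).
perf_pid=2237194          # PID of the background spdk_nvme_perf job in this run
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 30 )) && { echo "perf did not exit in time" >&2; break; }
    sleep 0.5
done
# Once kill -0 reports "No such process", the script reaps the (expected)
# non-zero exit status; the NOT / es=1 lines in the log come from that check.
wait "$perf_pid" || true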
00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.077 [2024-07-25 14:38:13.111191] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2237721 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2237721 00:10:53.077 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:53.077 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.077 [2024-07-25 14:38:13.173633] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
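This stretch re-creates cnode1 (capped at 10 namespaces), re-adds the TCP listener on 10.0.0.2:4420 and the Delay0 namespace, then launches a 3-second 70/30 random read/write spdk_nvme_perf job at queue depth 128; its PID (2237721) is what the kill -0/sleep loop below polls. A hedged sketch with paths shortened (the later deletion of the subsystem is not visible in this part of the log):

rpc=./scripts/rpc.py      # full path in the log: .../spdk/scripts/rpc.py
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Background load from the SPDK perf tool, flags exactly as traced above:
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!               # 2237721 in this run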
00:10:53.646 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:53.646 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2237721 00:10:53.646 14:38:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:53.905 14:38:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:53.905 14:38:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2237721 00:10:53.905 14:38:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:54.476 14:38:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:54.476 14:38:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2237721 00:10:54.476 14:38:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:55.046 14:38:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:55.046 14:38:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2237721 00:10:55.046 14:38:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:55.616 14:38:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:55.616 14:38:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2237721 00:10:55.616 14:38:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:55.876 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:55.876 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2237721 00:10:55.876 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:56.137 Initializing NVMe Controllers 00:10:56.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:56.137 Controller IO queue size 128, less than required. 00:10:56.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:56.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:56.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:56.137 Initialization complete. Launching workers. 
00:10:56.137 ======================================================== 00:10:56.137 Latency(us) 00:10:56.137 Device Information : IOPS MiB/s Average min max 00:10:56.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004318.81 1000450.82 1010580.71 00:10:56.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005777.60 1000457.72 1042732.66 00:10:56.137 ======================================================== 00:10:56.137 Total : 256.00 0.12 1005048.21 1000450.82 1042732.66 00:10:56.137 00:10:56.397 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:56.397 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2237721 00:10:56.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2237721) - No such process 00:10:56.397 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2237721 00:10:56.397 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:56.397 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:56.397 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.397 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:56.397 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:56.397 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:56.397 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.397 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.397 rmmod nvme_tcp 00:10:56.397 rmmod nvme_fabrics 00:10:56.657 rmmod nvme_keyring 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2236951 ']' 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2236951 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2236951 ']' 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2236951 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2236951 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2236951' 00:10:56.657 killing process with pid 2236951 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2236951 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
2236951 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.657 14:38:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.199 14:38:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:59.199 00:10:59.199 real 0m15.665s 00:10:59.199 user 0m29.975s 00:10:59.199 sys 0m4.649s 00:10:59.199 14:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:59.199 14:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.199 ************************************ 00:10:59.199 END TEST nvmf_delete_subsystem 00:10:59.199 ************************************ 00:10:59.199 14:38:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:59.199 14:38:19 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:59.199 14:38:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:59.199 14:38:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.199 14:38:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:59.199 ************************************ 00:10:59.199 START TEST nvmf_ns_masking 00:10:59.199 ************************************ 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:59.199 * Looking for test storage... 
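For reference, the nvmftestfini/killprocess trace that closes the delete_subsystem run (just before the nvmf_ns_masking banner above) amounts to this cleanup sequence; a hedged sketch, with the target PID taken from this run:

nvmfpid=2236951                      # nvmf_tgt (reactor_0) started earlier in the test
modprobe -v -r nvme-tcp              # with -v this prints the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines seen above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the target and reap it
ip -4 addr flush cvl_0_1             # drop the test address from the initiator-side port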
00:10:59.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:59.199 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=966319cc-a970-474d-867c-aeae2b2d15b7 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=bd6b583a-c7f4-4f2d-8371-5bacb30b3649 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4ec6a5cc-ac13-430e-8a6f-7ab014f5baf0 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.200 14:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:04.488 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:04.488 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.488 
14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:04.488 Found net devices under 0000:86:00.0: cvl_0_0 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:04.488 Found net devices under 0000:86:00.1: cvl_0_1 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.488 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:04.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:11:04.488 00:11:04.488 --- 10.0.0.2 ping statistics --- 00:11:04.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.489 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:04.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:11:04.489 00:11:04.489 --- 10.0.0.1 ping statistics --- 00:11:04.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.489 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2241871 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2241871 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2241871 ']' 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.489 14:38:24 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.489 14:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:04.489 [2024-07-25 14:38:24.574098] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:11:04.489 [2024-07-25 14:38:24.574144] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.489 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.489 [2024-07-25 14:38:24.632027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.489 [2024-07-25 14:38:24.707639] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.489 [2024-07-25 14:38:24.707676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.489 [2024-07-25 14:38:24.707683] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.489 [2024-07-25 14:38:24.707689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.489 [2024-07-25 14:38:24.707694] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.489 [2024-07-25 14:38:24.707717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.101 14:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:05.101 14:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:05.101 14:38:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:05.101 14:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:05.101 14:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:05.361 14:38:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.361 14:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:05.361 [2024-07-25 14:38:25.559899] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.361 14:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:05.362 14:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:05.362 14:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:05.622 Malloc1 00:11:05.622 14:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:05.882 Malloc2 00:11:05.882 14:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
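At this point the ns_masking test has its target running and builds the export side over the RPC socket: a TCP transport, two 64 MiB malloc bdevs, and subsystem cnode1. A hedged sketch of that bring-up, with the rpc.py path shortened:

# The target itself was started earlier (see the trace above) as roughly:
#   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MiB bdev, 512-byte blocks
$rpc bdev_malloc_create 64 512 -b Malloc2
# -a: allow any host to connect, -s: serial number reported to the initiator
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME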
00:11:05.882 14:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:06.142 14:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.402 [2024-07-25 14:38:26.448666] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.402 14:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:06.402 14:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4ec6a5cc-ac13-430e-8a6f-7ab014f5baf0 -a 10.0.0.2 -s 4420 -i 4 00:11:06.402 14:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:06.402 14:38:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:06.402 14:38:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.402 14:38:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:06.402 14:38:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:08.941 [ 0]:0x1 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c69109e8bac44016a958abcf3e5aa7b1 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c69109e8bac44016a958abcf3e5aa7b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
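The connect/visibility probing above reduces to a small helper: look up the controller name for cnode1 via nvme list-subsys, then decide whether a namespace is visible by checking that nvme reports a non-zero NGUID for it. A hedged reconstruction of that check (the script's actual helper may differ in details):

# Controller lookup, as traced above:
ctrl_id=$(nvme list-subsys -o json \
    | jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name')

ns_is_visible() {     # usage: ns_is_visible 0x1
    local nsid=$1
    nvme list-ns "/dev/$ctrl_id" | grep "$nsid"        # prints "[ n]:0xN" when listed
    local nguid
    nguid=$(nvme id-ns "/dev/$ctrl_id" -n "$nsid" -o json | jq -r .nguid)
    # an all-zero NGUID means the namespace is hidden from this host
    [[ $nguid != "00000000000000000000000000000000" ]]
}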
00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:08.941 [ 0]:0x1 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:08.941 14:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.941 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c69109e8bac44016a958abcf3e5aa7b1 00:11:08.941 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c69109e8bac44016a958abcf3e5aa7b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.941 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:08.941 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.941 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:08.941 [ 1]:0x2 00:11:08.941 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:08.941 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.941 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7f10c51779df42bb9042e107329bea16 00:11:08.941 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7f10c51779df42bb9042e107329bea16 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.941 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:08.941 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.941 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.201 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:09.201 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:09.201 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4ec6a5cc-ac13-430e-8a6f-7ab014f5baf0 -a 10.0.0.2 -s 4420 -i 4 00:11:09.461 14:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:09.461 14:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:09.461 14:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.461 14:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:09.461 14:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:09.461 14:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:12.001 14:38:31 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:12.001 [ 0]:0x2 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7f10c51779df42bb9042e107329bea16 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
7f10c51779df42bb9042e107329bea16 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.001 14:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:12.001 [ 0]:0x1 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c69109e8bac44016a958abcf3e5aa7b1 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c69109e8bac44016a958abcf3e5aa7b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.001 [ 1]:0x2 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7f10c51779df42bb9042e107329bea16 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7f10c51779df42bb9042e107329bea16 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.001 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:12.261 [ 0]:0x2 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7f10c51779df42bb9042e107329bea16 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7f10c51779df42bb9042e107329bea16 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:12.261 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.521 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:12.779 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:12.779 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4ec6a5cc-ac13-430e-8a6f-7ab014f5baf0 -a 10.0.0.2 -s 4420 -i 4 00:11:12.779 14:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:12.779 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:12.779 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.779 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:12.779 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:12.779 14:38:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:15.322 14:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:15.322 14:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:15.322 14:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
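The visibility probe the trace keeps repeating boils down to two nvme-cli calls: the namespace has to show up in nvme list-ns, and the NGUID reported by nvme id-ns has to be non-zero, because a namespace masked from the connected host NQN still enumerates but reports an all-zero NGUID. A simplified, stand-alone rendering of that check (device node and zero literal as they appear in this run; the real helper in target/ns_masking.sh also prints the matching NSID line):

ns_is_visible() {
    local nsid=$1
    # the namespace must be enumerated by the controller at all...
    nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
    # ...and its NGUID must be non-zero; a masked namespace reports all zeros
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

ns_is_visible 0x1   # succeeds only while the connected host NQN is allowed to see NSID 1

The NOT-wrapped invocations above expect exactly this to fail (es=1) once nvmf_ns_remove_host has masked the namespace again.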
00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:15.322 [ 0]:0x1 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c69109e8bac44016a958abcf3e5aa7b1 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c69109e8bac44016a958abcf3e5aa7b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:15.322 [ 1]:0x2 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7f10c51779df42bb9042e107329bea16 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7f10c51779df42bb9042e107329bea16 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:15.322 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:15.322 [ 0]:0x2 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7f10c51779df42bb9042e107329bea16 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7f10c51779df42bb9042e107329bea16 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:15.583 [2024-07-25 14:38:35.823591] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:15.583 request: 00:11:15.583 { 00:11:15.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.583 "nsid": 2, 00:11:15.583 "host": "nqn.2016-06.io.spdk:host1", 00:11:15.583 "method": "nvmf_ns_remove_host", 00:11:15.583 "req_id": 1 00:11:15.583 } 00:11:15.583 Got JSON-RPC error response 00:11:15.583 response: 00:11:15.583 { 00:11:15.583 "code": -32602, 00:11:15.583 "message": "Invalid parameters" 00:11:15.583 } 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:15.583 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:15.843 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:15.843 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.843 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:15.843 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:15.843 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:15.843 14:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:15.843 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:15.843 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:15.843 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:15.843 [ 0]:0x2 00:11:15.843 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:15.843 14:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:15.843 14:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7f10c51779df42bb9042e107329bea16 00:11:15.843 14:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
7f10c51779df42bb9042e107329bea16 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.843 14:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:15.843 14:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.843 14:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2243890 00:11:15.843 14:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.843 14:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2243890 /var/tmp/host.sock 00:11:15.843 14:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:15.843 14:38:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2243890 ']' 00:11:15.843 14:38:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:15.843 14:38:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.843 14:38:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:15.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:15.843 14:38:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.843 14:38:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:16.103 [2024-07-25 14:38:36.179584] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
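From this point the test stops using the kernel initiator and drives everything from a second SPDK process acting as the host: spdk_tgt is started on its own RPC socket, waitforlisten blocks until that socket answers, and every later host-side RPC (bdev_nvme_attach_controller, bdev_get_bdevs) passes -s /var/tmp/host.sock. Condensed, the pattern is (paths as used in this workspace):

# host-side spdk_tgt on a dedicated RPC socket, single core
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
hostpid=$!

# host-side RPCs target that socket; one controller is attached per host NQN
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0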
00:11:16.103 [2024-07-25 14:38:36.179628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2243890 ] 00:11:16.103 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.103 [2024-07-25 14:38:36.232624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.103 [2024-07-25 14:38:36.305632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.043 14:38:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.043 14:38:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:17.043 14:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.043 14:38:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:17.043 14:38:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 966319cc-a970-474d-867c-aeae2b2d15b7 00:11:17.043 14:38:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:17.043 14:38:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 966319CCA970474D867CAEAE2B2D15B7 -i 00:11:17.303 14:38:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid bd6b583a-c7f4-4f2d-8371-5bacb30b3649 00:11:17.303 14:38:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:17.303 14:38:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g BD6B583AC7F44F2D83715BACB30B3649 -i 00:11:17.563 14:38:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:17.563 14:38:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:17.823 14:38:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:17.823 14:38:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:18.083 nvme0n1 00:11:18.083 14:38:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:18.083 14:38:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:11:18.652 nvme1n2 00:11:18.652 14:38:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:18.652 14:38:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:18.652 14:38:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:18.652 14:38:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:18.652 14:38:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:18.652 14:38:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:18.652 14:38:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:18.652 14:38:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:18.652 14:38:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:18.911 14:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 966319cc-a970-474d-867c-aeae2b2d15b7 == \9\6\6\3\1\9\c\c\-\a\9\7\0\-\4\7\4\d\-\8\6\7\c\-\a\e\a\e\2\b\2\d\1\5\b\7 ]] 00:11:18.911 14:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:18.911 14:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:18.911 14:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:18.911 14:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ bd6b583a-c7f4-4f2d-8371-5bacb30b3649 == \b\d\6\b\5\8\3\a\-\c\7\f\4\-\4\f\2\d\-\8\3\7\1\-\5\b\a\c\b\3\0\b\3\6\4\9 ]] 00:11:18.911 14:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2243890 00:11:18.911 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2243890 ']' 00:11:18.911 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2243890 00:11:18.911 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:18.911 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:18.911 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2243890 00:11:19.170 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:19.170 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:19.170 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2243890' 00:11:19.170 killing process with pid 2243890 00:11:19.170 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2243890 00:11:19.171 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2243890 00:11:19.430 14:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.430 14:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:19.430 14:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:19.430 14:38:39 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:19.430 14:38:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:19.430 14:38:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:19.430 14:38:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:19.430 14:38:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:19.430 14:38:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:19.690 rmmod nvme_tcp 00:11:19.690 rmmod nvme_fabrics 00:11:19.690 rmmod nvme_keyring 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2241871 ']' 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2241871 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2241871 ']' 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2241871 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2241871 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2241871' 00:11:19.690 killing process with pid 2241871 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2241871 00:11:19.690 14:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2241871 00:11:19.950 14:38:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:19.951 14:38:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:19.951 14:38:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:19.951 14:38:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:19.951 14:38:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:19.951 14:38:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.951 14:38:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.951 14:38:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.861 14:38:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:21.861 00:11:21.861 real 0m23.055s 00:11:21.861 user 0m24.596s 00:11:21.861 sys 0m6.090s 00:11:21.861 14:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:21.861 14:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:21.861 ************************************ 00:11:21.861 END TEST nvmf_ns_masking 00:11:21.861 ************************************ 00:11:21.861 14:38:42 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:21.861 14:38:42 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:21.861 14:38:42 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:21.861 14:38:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:21.861 14:38:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:21.861 14:38:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:22.122 ************************************ 00:11:22.122 START TEST nvmf_nvme_cli 00:11:22.122 ************************************ 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:22.122 * Looking for test storage... 00:11:22.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:22.122 14:38:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:27.433 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:27.433 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:27.433 Found net devices under 0000:86:00.0: cvl_0_0 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:27.433 Found net devices under 0000:86:00.1: cvl_0_1 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.433 14:38:47 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:27.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:11:27.433 00:11:27.433 --- 10.0.0.2 ping statistics --- 00:11:27.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.433 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:11:27.433 00:11:27.433 --- 10.0.0.1 ping statistics --- 00:11:27.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.433 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2247907 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2247907 00:11:27.433 14:38:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2247907 ']' 00:11:27.434 14:38:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.434 14:38:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.434 14:38:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.434 14:38:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.434 14:38:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:27.434 [2024-07-25 14:38:47.539717] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
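The nvmf_tcp_init sequence above splits the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into a fresh network namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and the two pings confirm the path before the target is launched inside the namespace. Condensed, the plumbing amounts to (interface and namespace names as in this job):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # accept NVMe/TCP (port 4420) on the initiator-side interface

# the target itself then runs inside that namespace:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF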
00:11:27.434 [2024-07-25 14:38:47.539758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.434 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.434 [2024-07-25 14:38:47.596738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.434 [2024-07-25 14:38:47.677940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.434 [2024-07-25 14:38:47.677977] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.434 [2024-07-25 14:38:47.677984] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.434 [2024-07-25 14:38:47.677990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.434 [2024-07-25 14:38:47.677995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.434 [2024-07-25 14:38:47.678035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.434 [2024-07-25 14:38:47.678133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.434 [2024-07-25 14:38:47.678150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.434 [2024-07-25 14:38:47.678151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:28.373 [2024-07-25 14:38:48.401120] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:28.373 Malloc0 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:28.373 Malloc1 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.373 14:38:48 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:28.373 [2024-07-25 14:38:48.482849] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:28.373 00:11:28.373 Discovery Log Number of Records 2, Generation counter 2 00:11:28.373 =====Discovery Log Entry 0====== 00:11:28.373 trtype: tcp 00:11:28.373 adrfam: ipv4 00:11:28.373 subtype: current discovery subsystem 00:11:28.373 treq: not required 00:11:28.373 portid: 0 00:11:28.373 trsvcid: 4420 00:11:28.373 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:28.373 traddr: 10.0.0.2 00:11:28.373 eflags: explicit discovery connections, duplicate discovery information 00:11:28.373 sectype: none 00:11:28.373 =====Discovery Log Entry 1====== 00:11:28.373 trtype: tcp 00:11:28.373 adrfam: ipv4 00:11:28.373 subtype: nvme subsystem 00:11:28.373 treq: not required 00:11:28.373 portid: 0 00:11:28.373 trsvcid: 4420 00:11:28.373 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:28.373 traddr: 10.0.0.2 00:11:28.373 eflags: none 00:11:28.373 sectype: none 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:28.373 14:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.756 14:38:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:29.756 14:38:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:29.756 14:38:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.756 14:38:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:29.756 14:38:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:29.756 14:38:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:31.667 14:38:51 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:31.667 /dev/nvme0n1 ]] 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.667 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:31.928 14:38:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.928 14:38:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:31.928 14:38:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:31.928 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:31.928 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:31.928 14:38:51 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:31.928 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:31.928 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:31.928 14:38:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:31.928 rmmod nvme_tcp 00:11:31.928 rmmod nvme_fabrics 00:11:31.928 rmmod nvme_keyring 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2247907 ']' 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2247907 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2247907 ']' 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2247907 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2247907 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2247907' 00:11:31.928 killing process with pid 2247907 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2247907 00:11:31.928 14:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2247907 00:11:32.189 14:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:32.189 14:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:32.189 14:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:32.189 14:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:32.189 14:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:32.189 14:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.189 14:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.189 14:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.100 14:38:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:34.100 00:11:34.100 real 0m12.173s 00:11:34.100 user 0m19.728s 00:11:34.100 sys 0m4.516s 00:11:34.100 14:38:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:34.100 14:38:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.100 ************************************ 00:11:34.100 END TEST nvmf_nvme_cli 00:11:34.100 ************************************ 00:11:34.100 14:38:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:34.100 14:38:54 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:34.100 14:38:54 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:34.100 14:38:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:34.100 14:38:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.100 14:38:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:34.361 ************************************ 00:11:34.361 START TEST nvmf_vfio_user 00:11:34.361 ************************************ 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:34.361 * Looking for test storage... 00:11:34.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:34.361 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:34.362 
14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2249190 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2249190' 00:11:34.362 Process pid: 2249190 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2249190 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2249190 ']' 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:34.362 14:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:34.362 [2024-07-25 14:38:54.585386] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:11:34.362 [2024-07-25 14:38:54.585430] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.362 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.362 [2024-07-25 14:38:54.639972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.622 [2024-07-25 14:38:54.723895] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.622 [2024-07-25 14:38:54.723927] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.622 [2024-07-25 14:38:54.723934] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.622 [2024-07-25 14:38:54.723940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.622 [2024-07-25 14:38:54.723946] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
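For reference, the vfio-user target setup that the trace below walks through can be reproduced by hand with the SPDK RPC client. A minimal sketch, assuming an nvmf_tgt already running and listening on the default /var/tmp/spdk.sock, with the sizes, NQNs and paths taken from the trace itself:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER                        # register the vfio-user transport
for i in 1 2; do
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i           # directory that will hold the vfio-user socket for controller $i
  $rpc bdev_malloc_create 64 512 -b Malloc$i                  # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0        # expose the subsystem at that vfio-user path
done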
00:11:34.622 [2024-07-25 14:38:54.723996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.622 [2024-07-25 14:38:54.724013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.622 [2024-07-25 14:38:54.724029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.622 [2024-07-25 14:38:54.724031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.192 14:38:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:35.192 14:38:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:35.192 14:38:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:36.132 14:38:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:36.392 14:38:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:36.392 14:38:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:36.392 14:38:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:36.392 14:38:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:36.392 14:38:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:36.652 Malloc1 00:11:36.652 14:38:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:36.912 14:38:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:36.912 14:38:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:37.173 14:38:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:37.173 14:38:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:37.173 14:38:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:37.433 Malloc2 00:11:37.433 14:38:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:37.433 14:38:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:37.693 14:38:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:37.955 14:38:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:37.955 14:38:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:37.955 14:38:58 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:37.955 14:38:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:37.955 14:38:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:37.955 14:38:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:37.955 [2024-07-25 14:38:58.073707] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:11:37.955 [2024-07-25 14:38:58.073740] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249892 ] 00:11:37.955 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.955 [2024-07-25 14:38:58.103571] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:37.955 [2024-07-25 14:38:58.113409] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:37.955 [2024-07-25 14:38:58.113428] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7febabc4c000 00:11:37.955 [2024-07-25 14:38:58.114409] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:37.955 [2024-07-25 14:38:58.115407] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:37.955 [2024-07-25 14:38:58.116406] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:37.955 [2024-07-25 14:38:58.117418] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:37.955 [2024-07-25 14:38:58.118417] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:37.956 [2024-07-25 14:38:58.119430] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:37.956 [2024-07-25 14:38:58.120445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:37.956 [2024-07-25 14:38:58.121443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:37.956 [2024-07-25 14:38:58.122447] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:37.956 [2024-07-25 14:38:58.122456] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7febabc41000 00:11:37.956 [2024-07-25 14:38:58.123400] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:37.956 [2024-07-25 14:38:58.132000] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:37.956 [2024-07-25 14:38:58.132023] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:37.956 [2024-07-25 14:38:58.136527] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:37.956 [2024-07-25 14:38:58.136564] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:37.956 [2024-07-25 14:38:58.136634] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:37.956 [2024-07-25 14:38:58.136651] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:37.956 [2024-07-25 14:38:58.136656] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:37.956 [2024-07-25 14:38:58.137529] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:37.956 [2024-07-25 14:38:58.137536] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:37.956 [2024-07-25 14:38:58.137543] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:37.956 [2024-07-25 14:38:58.142049] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:37.956 [2024-07-25 14:38:58.142058] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:37.956 [2024-07-25 14:38:58.142064] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:37.956 [2024-07-25 14:38:58.142556] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:37.956 [2024-07-25 14:38:58.142564] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:37.956 [2024-07-25 14:38:58.143561] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:37.956 [2024-07-25 14:38:58.143568] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:37.956 [2024-07-25 14:38:58.143572] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:37.956 [2024-07-25 14:38:58.143578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:37.956 [2024-07-25 14:38:58.143683] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:37.956 [2024-07-25 14:38:58.143689] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:37.956 [2024-07-25 14:38:58.143694] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:37.956 [2024-07-25 14:38:58.144569] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:37.956 [2024-07-25 14:38:58.145577] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:37.956 [2024-07-25 14:38:58.146582] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:37.956 [2024-07-25 14:38:58.147583] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:37.956 [2024-07-25 14:38:58.147645] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:37.956 [2024-07-25 14:38:58.148596] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:37.956 [2024-07-25 14:38:58.148604] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:37.956 [2024-07-25 14:38:58.148608] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:37.956 [2024-07-25 14:38:58.148625] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:37.956 [2024-07-25 14:38:58.148632] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:37.956 [2024-07-25 14:38:58.148645] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:37.956 [2024-07-25 14:38:58.148650] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:37.956 [2024-07-25 14:38:58.148662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:37.956 [2024-07-25 14:38:58.148700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:37.956 [2024-07-25 14:38:58.148709] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:37.956 [2024-07-25 14:38:58.148715] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:37.956 [2024-07-25 14:38:58.148719] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:37.956 [2024-07-25 14:38:58.148723] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:37.956 [2024-07-25 14:38:58.148727] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:37.956 [2024-07-25 14:38:58.148731] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:37.956 [2024-07-25 14:38:58.148735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:37.956 [2024-07-25 14:38:58.148741] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:37.956 [2024-07-25 14:38:58.148750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:37.956 [2024-07-25 14:38:58.148766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:37.956 [2024-07-25 14:38:58.148872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.956 [2024-07-25 14:38:58.148880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.956 [2024-07-25 14:38:58.148887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.956 [2024-07-25 14:38:58.148895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.956 [2024-07-25 14:38:58.148899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:37.956 [2024-07-25 14:38:58.148907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:37.956 [2024-07-25 14:38:58.148915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:37.956 [2024-07-25 14:38:58.148923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:37.956 [2024-07-25 14:38:58.148928] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:37.956 [2024-07-25 14:38:58.148932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:37.956 [2024-07-25 14:38:58.148938] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:37.956 [2024-07-25 14:38:58.148943] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:37.956 [2024-07-25 14:38:58.148950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:37.956 [2024-07-25 14:38:58.148964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:37.956 [2024-07-25 14:38:58.149012] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:37.956 [2024-07-25 14:38:58.149019] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:37.956 [2024-07-25 14:38:58.149026] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:37.956 [2024-07-25 14:38:58.149030] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:37.956 [2024-07-25 14:38:58.149035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:37.956 [2024-07-25 14:38:58.149049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:37.956 [2024-07-25 14:38:58.149061] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:37.956 [2024-07-25 14:38:58.149069] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:37.956 [2024-07-25 14:38:58.149076] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:37.956 [2024-07-25 14:38:58.149082] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:37.956 [2024-07-25 14:38:58.149086] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:37.956 [2024-07-25 14:38:58.149093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:37.957 [2024-07-25 14:38:58.149109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:37.957 [2024-07-25 14:38:58.149121] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:37.957 [2024-07-25 14:38:58.149128] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:37.957 [2024-07-25 14:38:58.149134] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:37.957 [2024-07-25 14:38:58.149138] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:37.957 [2024-07-25 14:38:58.149144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:37.957 [2024-07-25 14:38:58.149155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:37.957 [2024-07-25 14:38:58.149162] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:37.957 [2024-07-25 14:38:58.149167] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:11:37.957 [2024-07-25 14:38:58.149174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:37.957 [2024-07-25 14:38:58.149179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:37.957 [2024-07-25 14:38:58.149184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:37.957 [2024-07-25 14:38:58.149188] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:37.957 [2024-07-25 14:38:58.149193] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:37.957 [2024-07-25 14:38:58.149196] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:37.957 [2024-07-25 14:38:58.149201] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:37.957 [2024-07-25 14:38:58.149217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:37.957 [2024-07-25 14:38:58.149228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:37.957 [2024-07-25 14:38:58.149238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:37.957 [2024-07-25 14:38:58.149248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:37.957 [2024-07-25 14:38:58.149257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:37.957 [2024-07-25 14:38:58.149266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:37.957 [2024-07-25 14:38:58.149275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:37.957 [2024-07-25 14:38:58.149285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:37.957 [2024-07-25 14:38:58.149299] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:37.957 [2024-07-25 14:38:58.149303] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:37.957 [2024-07-25 14:38:58.149306] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:37.957 [2024-07-25 14:38:58.149309] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:37.957 [2024-07-25 14:38:58.149315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:37.957 [2024-07-25 14:38:58.149321] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:37.957 
[2024-07-25 14:38:58.149325] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:37.957 [2024-07-25 14:38:58.149330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:37.957 [2024-07-25 14:38:58.149336] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:37.957 [2024-07-25 14:38:58.149340] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:37.957 [2024-07-25 14:38:58.149345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:37.957 [2024-07-25 14:38:58.149351] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:37.957 [2024-07-25 14:38:58.149355] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:37.957 [2024-07-25 14:38:58.149361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:37.957 [2024-07-25 14:38:58.149367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:37.957 [2024-07-25 14:38:58.149377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:37.957 [2024-07-25 14:38:58.149387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:37.957 [2024-07-25 14:38:58.149394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:37.957 ===================================================== 00:11:37.957 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:37.957 ===================================================== 00:11:37.957 Controller Capabilities/Features 00:11:37.957 ================================ 00:11:37.957 Vendor ID: 4e58 00:11:37.957 Subsystem Vendor ID: 4e58 00:11:37.957 Serial Number: SPDK1 00:11:37.957 Model Number: SPDK bdev Controller 00:11:37.957 Firmware Version: 24.09 00:11:37.957 Recommended Arb Burst: 6 00:11:37.957 IEEE OUI Identifier: 8d 6b 50 00:11:37.957 Multi-path I/O 00:11:37.957 May have multiple subsystem ports: Yes 00:11:37.957 May have multiple controllers: Yes 00:11:37.957 Associated with SR-IOV VF: No 00:11:37.957 Max Data Transfer Size: 131072 00:11:37.957 Max Number of Namespaces: 32 00:11:37.957 Max Number of I/O Queues: 127 00:11:37.957 NVMe Specification Version (VS): 1.3 00:11:37.957 NVMe Specification Version (Identify): 1.3 00:11:37.957 Maximum Queue Entries: 256 00:11:37.957 Contiguous Queues Required: Yes 00:11:37.957 Arbitration Mechanisms Supported 00:11:37.957 Weighted Round Robin: Not Supported 00:11:37.957 Vendor Specific: Not Supported 00:11:37.957 Reset Timeout: 15000 ms 00:11:37.957 Doorbell Stride: 4 bytes 00:11:37.957 NVM Subsystem Reset: Not Supported 00:11:37.957 Command Sets Supported 00:11:37.957 NVM Command Set: Supported 00:11:37.957 Boot Partition: Not Supported 00:11:37.957 Memory Page Size Minimum: 4096 bytes 00:11:37.957 Memory Page Size Maximum: 4096 bytes 00:11:37.957 Persistent Memory Region: Not Supported 
00:11:37.957 Optional Asynchronous Events Supported 00:11:37.957 Namespace Attribute Notices: Supported 00:11:37.957 Firmware Activation Notices: Not Supported 00:11:37.957 ANA Change Notices: Not Supported 00:11:37.957 PLE Aggregate Log Change Notices: Not Supported 00:11:37.957 LBA Status Info Alert Notices: Not Supported 00:11:37.957 EGE Aggregate Log Change Notices: Not Supported 00:11:37.957 Normal NVM Subsystem Shutdown event: Not Supported 00:11:37.957 Zone Descriptor Change Notices: Not Supported 00:11:37.957 Discovery Log Change Notices: Not Supported 00:11:37.957 Controller Attributes 00:11:37.957 128-bit Host Identifier: Supported 00:11:37.957 Non-Operational Permissive Mode: Not Supported 00:11:37.957 NVM Sets: Not Supported 00:11:37.957 Read Recovery Levels: Not Supported 00:11:37.957 Endurance Groups: Not Supported 00:11:37.957 Predictable Latency Mode: Not Supported 00:11:37.957 Traffic Based Keep ALive: Not Supported 00:11:37.957 Namespace Granularity: Not Supported 00:11:37.957 SQ Associations: Not Supported 00:11:37.957 UUID List: Not Supported 00:11:37.957 Multi-Domain Subsystem: Not Supported 00:11:37.957 Fixed Capacity Management: Not Supported 00:11:37.957 Variable Capacity Management: Not Supported 00:11:37.957 Delete Endurance Group: Not Supported 00:11:37.957 Delete NVM Set: Not Supported 00:11:37.957 Extended LBA Formats Supported: Not Supported 00:11:37.957 Flexible Data Placement Supported: Not Supported 00:11:37.957 00:11:37.957 Controller Memory Buffer Support 00:11:37.957 ================================ 00:11:37.957 Supported: No 00:11:37.957 00:11:37.957 Persistent Memory Region Support 00:11:37.957 ================================ 00:11:37.957 Supported: No 00:11:37.957 00:11:37.957 Admin Command Set Attributes 00:11:37.957 ============================ 00:11:37.957 Security Send/Receive: Not Supported 00:11:37.957 Format NVM: Not Supported 00:11:37.957 Firmware Activate/Download: Not Supported 00:11:37.957 Namespace Management: Not Supported 00:11:37.957 Device Self-Test: Not Supported 00:11:37.957 Directives: Not Supported 00:11:37.957 NVMe-MI: Not Supported 00:11:37.957 Virtualization Management: Not Supported 00:11:37.957 Doorbell Buffer Config: Not Supported 00:11:37.957 Get LBA Status Capability: Not Supported 00:11:37.957 Command & Feature Lockdown Capability: Not Supported 00:11:37.957 Abort Command Limit: 4 00:11:37.957 Async Event Request Limit: 4 00:11:37.957 Number of Firmware Slots: N/A 00:11:37.958 Firmware Slot 1 Read-Only: N/A 00:11:37.958 Firmware Activation Without Reset: N/A 00:11:37.958 Multiple Update Detection Support: N/A 00:11:37.958 Firmware Update Granularity: No Information Provided 00:11:37.958 Per-Namespace SMART Log: No 00:11:37.958 Asymmetric Namespace Access Log Page: Not Supported 00:11:37.958 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:37.958 Command Effects Log Page: Supported 00:11:37.958 Get Log Page Extended Data: Supported 00:11:37.958 Telemetry Log Pages: Not Supported 00:11:37.958 Persistent Event Log Pages: Not Supported 00:11:37.958 Supported Log Pages Log Page: May Support 00:11:37.958 Commands Supported & Effects Log Page: Not Supported 00:11:37.958 Feature Identifiers & Effects Log Page:May Support 00:11:37.958 NVMe-MI Commands & Effects Log Page: May Support 00:11:37.958 Data Area 4 for Telemetry Log: Not Supported 00:11:37.958 Error Log Page Entries Supported: 128 00:11:37.958 Keep Alive: Supported 00:11:37.958 Keep Alive Granularity: 10000 ms 00:11:37.958 00:11:37.958 NVM Command Set Attributes 
00:11:37.958 ========================== 00:11:37.958 Submission Queue Entry Size 00:11:37.958 Max: 64 00:11:37.958 Min: 64 00:11:37.958 Completion Queue Entry Size 00:11:37.958 Max: 16 00:11:37.958 Min: 16 00:11:37.958 Number of Namespaces: 32 00:11:37.958 Compare Command: Supported 00:11:37.958 Write Uncorrectable Command: Not Supported 00:11:37.958 Dataset Management Command: Supported 00:11:37.958 Write Zeroes Command: Supported 00:11:37.958 Set Features Save Field: Not Supported 00:11:37.958 Reservations: Not Supported 00:11:37.958 Timestamp: Not Supported 00:11:37.958 Copy: Supported 00:11:37.958 Volatile Write Cache: Present 00:11:37.958 Atomic Write Unit (Normal): 1 00:11:37.958 Atomic Write Unit (PFail): 1 00:11:37.958 Atomic Compare & Write Unit: 1 00:11:37.958 Fused Compare & Write: Supported 00:11:37.958 Scatter-Gather List 00:11:37.958 SGL Command Set: Supported (Dword aligned) 00:11:37.958 SGL Keyed: Not Supported 00:11:37.958 SGL Bit Bucket Descriptor: Not Supported 00:11:37.958 SGL Metadata Pointer: Not Supported 00:11:37.958 Oversized SGL: Not Supported 00:11:37.958 SGL Metadata Address: Not Supported 00:11:37.958 SGL Offset: Not Supported 00:11:37.958 Transport SGL Data Block: Not Supported 00:11:37.958 Replay Protected Memory Block: Not Supported 00:11:37.958 00:11:37.958 Firmware Slot Information 00:11:37.958 ========================= 00:11:37.958 Active slot: 1 00:11:37.958 Slot 1 Firmware Revision: 24.09 00:11:37.958 00:11:37.958 00:11:37.958 Commands Supported and Effects 00:11:37.958 ============================== 00:11:37.958 Admin Commands 00:11:37.958 -------------- 00:11:37.958 Get Log Page (02h): Supported 00:11:37.958 Identify (06h): Supported 00:11:37.958 Abort (08h): Supported 00:11:37.958 Set Features (09h): Supported 00:11:37.958 Get Features (0Ah): Supported 00:11:37.958 Asynchronous Event Request (0Ch): Supported 00:11:37.958 Keep Alive (18h): Supported 00:11:37.958 I/O Commands 00:11:37.958 ------------ 00:11:37.958 Flush (00h): Supported LBA-Change 00:11:37.958 Write (01h): Supported LBA-Change 00:11:37.958 Read (02h): Supported 00:11:37.958 Compare (05h): Supported 00:11:37.958 Write Zeroes (08h): Supported LBA-Change 00:11:37.958 Dataset Management (09h): Supported LBA-Change 00:11:37.958 Copy (19h): Supported LBA-Change 00:11:37.958 00:11:37.958 Error Log 00:11:37.958 ========= 00:11:37.958 00:11:37.958 Arbitration 00:11:37.958 =========== 00:11:37.958 Arbitration Burst: 1 00:11:37.958 00:11:37.958 Power Management 00:11:37.958 ================ 00:11:37.958 Number of Power States: 1 00:11:37.958 Current Power State: Power State #0 00:11:37.958 Power State #0: 00:11:37.958 Max Power: 0.00 W 00:11:37.958 Non-Operational State: Operational 00:11:37.958 Entry Latency: Not Reported 00:11:37.958 Exit Latency: Not Reported 00:11:37.958 Relative Read Throughput: 0 00:11:37.958 Relative Read Latency: 0 00:11:37.958 Relative Write Throughput: 0 00:11:37.958 Relative Write Latency: 0 00:11:37.958 Idle Power: Not Reported 00:11:37.958 Active Power: Not Reported 00:11:37.958 Non-Operational Permissive Mode: Not Supported 00:11:37.958 00:11:37.958 Health Information 00:11:37.958 ================== 00:11:37.958 Critical Warnings: 00:11:37.958 Available Spare Space: OK 00:11:37.958 Temperature: OK 00:11:37.958 Device Reliability: OK 00:11:37.958 Read Only: No 00:11:37.958 Volatile Memory Backup: OK 00:11:37.958 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:37.958 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:37.958 Available Spare: 0% 00:11:37.958 
Available Sp[2024-07-25 14:38:58.149479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:37.958 [2024-07-25 14:38:58.149488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:37.958 [2024-07-25 14:38:58.149513] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:37.958 [2024-07-25 14:38:58.149521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.958 [2024-07-25 14:38:58.149527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.958 [2024-07-25 14:38:58.149532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.958 [2024-07-25 14:38:58.149538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.958 [2024-07-25 14:38:58.149600] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:37.958 [2024-07-25 14:38:58.149609] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:37.958 [2024-07-25 14:38:58.150605] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:37.958 [2024-07-25 14:38:58.150651] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:37.958 [2024-07-25 14:38:58.150657] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:37.958 [2024-07-25 14:38:58.151614] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:37.958 [2024-07-25 14:38:58.151624] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:37.958 [2024-07-25 14:38:58.151672] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:37.958 [2024-07-25 14:38:58.153646] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:37.958 are Threshold: 0% 00:11:37.958 Life Percentage Used: 0% 00:11:37.958 Data Units Read: 0 00:11:37.958 Data Units Written: 0 00:11:37.958 Host Read Commands: 0 00:11:37.958 Host Write Commands: 0 00:11:37.958 Controller Busy Time: 0 minutes 00:11:37.958 Power Cycles: 0 00:11:37.958 Power On Hours: 0 hours 00:11:37.958 Unsafe Shutdowns: 0 00:11:37.958 Unrecoverable Media Errors: 0 00:11:37.958 Lifetime Error Log Entries: 0 00:11:37.958 Warning Temperature Time: 0 minutes 00:11:37.958 Critical Temperature Time: 0 minutes 00:11:37.958 00:11:37.958 Number of Queues 00:11:37.958 ================ 00:11:37.958 Number of I/O Submission Queues: 127 00:11:37.958 Number of I/O Completion Queues: 127 00:11:37.958 00:11:37.958 Active Namespaces 00:11:37.958 ================= 00:11:37.958 Namespace ID:1 00:11:37.958 Error Recovery Timeout: Unlimited 00:11:37.958 Command 
Set Identifier: NVM (00h) 00:11:37.958 Deallocate: Supported 00:11:37.958 Deallocated/Unwritten Error: Not Supported 00:11:37.958 Deallocated Read Value: Unknown 00:11:37.958 Deallocate in Write Zeroes: Not Supported 00:11:37.958 Deallocated Guard Field: 0xFFFF 00:11:37.958 Flush: Supported 00:11:37.958 Reservation: Supported 00:11:37.958 Namespace Sharing Capabilities: Multiple Controllers 00:11:37.958 Size (in LBAs): 131072 (0GiB) 00:11:37.958 Capacity (in LBAs): 131072 (0GiB) 00:11:37.958 Utilization (in LBAs): 131072 (0GiB) 00:11:37.958 NGUID: 56C564D558244ED3B8FB31DF8E66CFFE 00:11:37.958 UUID: 56c564d5-5824-4ed3-b8fb-31df8e66cffe 00:11:37.958 Thin Provisioning: Not Supported 00:11:37.958 Per-NS Atomic Units: Yes 00:11:37.958 Atomic Boundary Size (Normal): 0 00:11:37.958 Atomic Boundary Size (PFail): 0 00:11:37.958 Atomic Boundary Offset: 0 00:11:37.958 Maximum Single Source Range Length: 65535 00:11:37.959 Maximum Copy Length: 65535 00:11:37.959 Maximum Source Range Count: 1 00:11:37.959 NGUID/EUI64 Never Reused: No 00:11:37.959 Namespace Write Protected: No 00:11:37.959 Number of LBA Formats: 1 00:11:37.959 Current LBA Format: LBA Format #00 00:11:37.959 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:37.959 00:11:37.959 14:38:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:37.959 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.219 [2024-07-25 14:38:58.367792] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:43.514 Initializing NVMe Controllers 00:11:43.514 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:43.514 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:43.514 Initialization complete. Launching workers. 00:11:43.514 ======================================================== 00:11:43.514 Latency(us) 00:11:43.514 Device Information : IOPS MiB/s Average min max 00:11:43.514 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39830.13 155.59 3213.23 962.84 10614.19 00:11:43.514 ======================================================== 00:11:43.514 Total : 39830.13 155.59 3213.23 962.84 10614.19 00:11:43.514 00:11:43.514 [2024-07-25 14:39:03.388910] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:43.514 14:39:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:43.514 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.514 [2024-07-25 14:39:03.610934] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:48.828 Initializing NVMe Controllers 00:11:48.828 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:48.828 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:48.828 Initialization complete. Launching workers. 
00:11:48.828 ======================================================== 00:11:48.828 Latency(us) 00:11:48.828 Device Information : IOPS MiB/s Average min max 00:11:48.828 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.09 62.70 7979.86 7775.77 8052.87 00:11:48.828 ======================================================== 00:11:48.828 Total : 16051.09 62.70 7979.86 7775.77 8052.87 00:11:48.828 00:11:48.828 [2024-07-25 14:39:08.653902] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:48.828 14:39:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:48.828 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.828 [2024-07-25 14:39:08.848884] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:54.114 [2024-07-25 14:39:13.939430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:54.114 Initializing NVMe Controllers 00:11:54.114 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:54.114 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:54.114 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:54.114 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:54.114 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:54.114 Initialization complete. Launching workers. 00:11:54.114 Starting thread on core 2 00:11:54.114 Starting thread on core 3 00:11:54.114 Starting thread on core 1 00:11:54.114 14:39:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:54.114 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.114 [2024-07-25 14:39:14.224468] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:57.412 [2024-07-25 14:39:17.288065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:57.412 Initializing NVMe Controllers 00:11:57.412 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:57.412 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:57.412 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:57.412 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:57.412 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:57.412 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:57.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:57.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:57.412 Initialization complete. Launching workers. 
00:11:57.412 Starting thread on core 1 with urgent priority queue 00:11:57.412 Starting thread on core 2 with urgent priority queue 00:11:57.412 Starting thread on core 3 with urgent priority queue 00:11:57.412 Starting thread on core 0 with urgent priority queue 00:11:57.412 SPDK bdev Controller (SPDK1 ) core 0: 9566.00 IO/s 10.45 secs/100000 ios 00:11:57.412 SPDK bdev Controller (SPDK1 ) core 1: 7721.33 IO/s 12.95 secs/100000 ios 00:11:57.412 SPDK bdev Controller (SPDK1 ) core 2: 10260.67 IO/s 9.75 secs/100000 ios 00:11:57.412 SPDK bdev Controller (SPDK1 ) core 3: 7906.00 IO/s 12.65 secs/100000 ios 00:11:57.412 ======================================================== 00:11:57.412 00:11:57.412 14:39:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:57.412 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.412 [2024-07-25 14:39:17.563550] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:57.412 Initializing NVMe Controllers 00:11:57.412 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:57.412 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:57.412 Namespace ID: 1 size: 0GB 00:11:57.412 Initialization complete. 00:11:57.412 INFO: using host memory buffer for IO 00:11:57.412 Hello world! 00:11:57.412 [2024-07-25 14:39:17.597784] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:57.412 14:39:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:57.412 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.672 [2024-07-25 14:39:17.858480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:58.613 Initializing NVMe Controllers 00:11:58.613 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:58.613 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:58.613 Initialization complete. Launching workers. 
00:11:58.613 submit (in ns) avg, min, max = 9114.0, 3283.5, 3999261.7 00:11:58.613 complete (in ns) avg, min, max = 20867.8, 1838.3, 3998767.0 00:11:58.613 00:11:58.613 Submit histogram 00:11:58.613 ================ 00:11:58.613 Range in us Cumulative Count 00:11:58.613 3.283 - 3.297: 0.0185% ( 3) 00:11:58.613 3.297 - 3.311: 0.0926% ( 12) 00:11:58.613 3.311 - 3.325: 0.2530% ( 26) 00:11:58.613 3.325 - 3.339: 1.2340% ( 159) 00:11:58.613 3.339 - 3.353: 4.4240% ( 517) 00:11:58.613 3.353 - 3.367: 9.6933% ( 854) 00:11:58.613 3.367 - 3.381: 15.5303% ( 946) 00:11:58.613 3.381 - 3.395: 21.5709% ( 979) 00:11:58.613 3.395 - 3.409: 27.5498% ( 969) 00:11:58.613 3.409 - 3.423: 32.9055% ( 868) 00:11:58.613 3.423 - 3.437: 38.4278% ( 895) 00:11:58.613 3.437 - 3.450: 43.8946% ( 886) 00:11:58.613 3.450 - 3.464: 48.2754% ( 710) 00:11:58.613 3.464 - 3.478: 51.9590% ( 597) 00:11:58.613 3.478 - 3.492: 57.4505% ( 890) 00:11:58.613 3.492 - 3.506: 64.6572% ( 1168) 00:11:58.613 3.506 - 3.520: 69.2849% ( 750) 00:11:58.613 3.520 - 3.534: 73.6040% ( 700) 00:11:58.613 3.534 - 3.548: 78.6203% ( 813) 00:11:58.613 3.548 - 3.562: 82.5137% ( 631) 00:11:58.613 3.562 - 3.590: 86.2775% ( 610) 00:11:58.613 3.590 - 3.617: 87.3388% ( 172) 00:11:58.613 3.617 - 3.645: 88.2273% ( 144) 00:11:58.613 3.645 - 3.673: 89.8130% ( 257) 00:11:58.613 3.673 - 3.701: 91.5160% ( 276) 00:11:58.613 3.701 - 3.729: 93.1820% ( 270) 00:11:58.613 3.729 - 3.757: 94.9960% ( 294) 00:11:58.613 3.757 - 3.784: 96.7298% ( 281) 00:11:58.613 3.784 - 3.812: 97.9268% ( 194) 00:11:58.613 3.812 - 3.840: 98.6919% ( 124) 00:11:58.613 3.840 - 3.868: 99.2040% ( 83) 00:11:58.613 3.868 - 3.896: 99.4262% ( 36) 00:11:58.613 3.896 - 3.923: 99.5064% ( 13) 00:11:58.613 3.923 - 3.951: 99.5311% ( 4) 00:11:58.613 3.951 - 3.979: 99.5496% ( 3) 00:11:58.613 4.035 - 4.063: 99.5557% ( 1) 00:11:58.613 5.009 - 5.037: 99.5619% ( 1) 00:11:58.613 5.037 - 5.064: 99.5681% ( 1) 00:11:58.613 5.176 - 5.203: 99.5743% ( 1) 00:11:58.613 5.231 - 5.259: 99.5804% ( 1) 00:11:58.613 5.259 - 5.287: 99.5866% ( 1) 00:11:58.613 5.287 - 5.315: 99.5989% ( 2) 00:11:58.613 5.315 - 5.343: 99.6051% ( 1) 00:11:58.613 5.537 - 5.565: 99.6113% ( 1) 00:11:58.613 5.565 - 5.593: 99.6174% ( 1) 00:11:58.613 5.649 - 5.677: 99.6236% ( 1) 00:11:58.613 5.677 - 5.704: 99.6421% ( 3) 00:11:58.613 5.704 - 5.732: 99.6483% ( 1) 00:11:58.613 5.816 - 5.843: 99.6545% ( 1) 00:11:58.613 5.843 - 5.871: 99.6606% ( 1) 00:11:58.613 5.899 - 5.927: 99.6668% ( 1) 00:11:58.613 5.983 - 6.010: 99.6792% ( 2) 00:11:58.613 6.038 - 6.066: 99.6915% ( 2) 00:11:58.613 6.066 - 6.094: 99.6977% ( 1) 00:11:58.613 6.150 - 6.177: 99.7162% ( 3) 00:11:58.613 6.205 - 6.233: 99.7285% ( 2) 00:11:58.613 6.317 - 6.344: 99.7347% ( 1) 00:11:58.613 6.456 - 6.483: 99.7409% ( 1) 00:11:58.613 6.790 - 6.817: 99.7470% ( 1) 00:11:58.613 6.929 - 6.957: 99.7532% ( 1) 00:11:58.613 6.984 - 7.012: 99.7594% ( 1) 00:11:58.613 7.096 - 7.123: 99.7655% ( 1) 00:11:58.613 7.123 - 7.179: 99.7717% ( 1) 00:11:58.613 7.179 - 7.235: 99.7840% ( 2) 00:11:58.614 7.235 - 7.290: 99.7964% ( 2) 00:11:58.614 7.402 - 7.457: 99.8149% ( 3) 00:11:58.614 7.569 - 7.624: 99.8211% ( 1) 00:11:58.614 7.624 - 7.680: 99.8272% ( 1) 00:11:58.614 7.791 - 7.847: 99.8334% ( 1) 00:11:58.614 7.903 - 7.958: 99.8396% ( 1) 00:11:58.614 7.958 - 8.014: 99.8457% ( 1) 00:11:58.614 11.019 - 11.075: 99.8519% ( 1) 00:11:58.614 14.581 - 14.692: 99.8581% ( 1) 00:11:58.614 3148.577 - 3162.824: 99.8643% ( 1) 00:11:58.614 3989.148 - 4017.642: 100.0000% ( 22) 00:11:58.614 00:11:58.614 Complete histogram 00:11:58.614 
================== 00:11:58.614 Range in us Cumulative Count 00:11:58.614 1.837 - 1.850: 0.5060% ( 82) 00:11:58.614 1.850 - 1.864: 7.6078% ( 1151) 00:11:58.614 1.864 - [2024-07-25 14:39:18.876413] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:58.874 1.878: 31.9060% ( 3938) 00:11:58.874 1.878 - 1.892: 77.3493% ( 7365) 00:11:58.874 1.892 - 1.906: 91.3432% ( 2268) 00:11:58.874 1.906 - 1.920: 94.1877% ( 461) 00:11:58.874 1.920 - 1.934: 95.8413% ( 268) 00:11:58.874 1.934 - 1.948: 96.9087% ( 173) 00:11:58.874 1.948 - 1.962: 98.0996% ( 193) 00:11:58.874 1.962 - 1.976: 98.8400% ( 120) 00:11:58.874 1.976 - 1.990: 99.1300% ( 47) 00:11:58.874 1.990 - 2.003: 99.2534% ( 20) 00:11:58.874 2.003 - 2.017: 99.3089% ( 9) 00:11:58.874 2.017 - 2.031: 99.3151% ( 1) 00:11:58.874 2.031 - 2.045: 99.3275% ( 2) 00:11:58.874 2.045 - 2.059: 99.3521% ( 4) 00:11:58.874 2.059 - 2.073: 99.3583% ( 1) 00:11:58.874 2.073 - 2.087: 99.3645% ( 1) 00:11:58.874 2.212 - 2.226: 99.3706% ( 1) 00:11:58.874 3.617 - 3.645: 99.3768% ( 1) 00:11:58.874 3.701 - 3.729: 99.3830% ( 1) 00:11:58.874 3.896 - 3.923: 99.3892% ( 1) 00:11:58.874 4.035 - 4.063: 99.3953% ( 1) 00:11:58.874 4.090 - 4.118: 99.4015% ( 1) 00:11:58.874 4.146 - 4.174: 99.4077% ( 1) 00:11:58.874 4.202 - 4.230: 99.4138% ( 1) 00:11:58.874 4.257 - 4.285: 99.4200% ( 1) 00:11:58.874 4.313 - 4.341: 99.4323% ( 2) 00:11:58.874 4.452 - 4.480: 99.4385% ( 1) 00:11:58.874 4.508 - 4.536: 99.4509% ( 2) 00:11:58.874 4.703 - 4.730: 99.4570% ( 1) 00:11:58.874 4.786 - 4.814: 99.4632% ( 1) 00:11:58.874 4.814 - 4.842: 99.4755% ( 2) 00:11:58.874 4.897 - 4.925: 99.4817% ( 1) 00:11:58.874 5.009 - 5.037: 99.4879% ( 1) 00:11:58.874 5.092 - 5.120: 99.4940% ( 1) 00:11:58.874 5.176 - 5.203: 99.5002% ( 1) 00:11:58.874 5.287 - 5.315: 99.5064% ( 1) 00:11:58.874 5.537 - 5.565: 99.5126% ( 1) 00:11:58.874 5.704 - 5.732: 99.5187% ( 1) 00:11:58.874 6.205 - 6.233: 99.5249% ( 1) 00:11:58.874 3989.148 - 4017.642: 100.0000% ( 77) 00:11:58.874 00:11:58.874 14:39:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:58.874 14:39:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:58.874 14:39:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:58.874 14:39:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:58.874 14:39:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:58.874 [ 00:11:58.874 { 00:11:58.874 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:58.874 "subtype": "Discovery", 00:11:58.874 "listen_addresses": [], 00:11:58.874 "allow_any_host": true, 00:11:58.874 "hosts": [] 00:11:58.874 }, 00:11:58.874 { 00:11:58.874 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:58.874 "subtype": "NVMe", 00:11:58.874 "listen_addresses": [ 00:11:58.874 { 00:11:58.874 "trtype": "VFIOUSER", 00:11:58.874 "adrfam": "IPv4", 00:11:58.874 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:58.874 "trsvcid": "0" 00:11:58.874 } 00:11:58.874 ], 00:11:58.874 "allow_any_host": true, 00:11:58.874 "hosts": [], 00:11:58.874 "serial_number": "SPDK1", 00:11:58.874 "model_number": "SPDK bdev Controller", 00:11:58.874 "max_namespaces": 32, 00:11:58.874 "min_cntlid": 1, 00:11:58.874 "max_cntlid": 65519, 00:11:58.874 
"namespaces": [ 00:11:58.874 { 00:11:58.874 "nsid": 1, 00:11:58.874 "bdev_name": "Malloc1", 00:11:58.874 "name": "Malloc1", 00:11:58.874 "nguid": "56C564D558244ED3B8FB31DF8E66CFFE", 00:11:58.874 "uuid": "56c564d5-5824-4ed3-b8fb-31df8e66cffe" 00:11:58.874 } 00:11:58.874 ] 00:11:58.874 }, 00:11:58.874 { 00:11:58.874 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:58.874 "subtype": "NVMe", 00:11:58.874 "listen_addresses": [ 00:11:58.874 { 00:11:58.874 "trtype": "VFIOUSER", 00:11:58.874 "adrfam": "IPv4", 00:11:58.874 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:58.874 "trsvcid": "0" 00:11:58.874 } 00:11:58.874 ], 00:11:58.874 "allow_any_host": true, 00:11:58.874 "hosts": [], 00:11:58.874 "serial_number": "SPDK2", 00:11:58.874 "model_number": "SPDK bdev Controller", 00:11:58.874 "max_namespaces": 32, 00:11:58.874 "min_cntlid": 1, 00:11:58.874 "max_cntlid": 65519, 00:11:58.874 "namespaces": [ 00:11:58.874 { 00:11:58.874 "nsid": 1, 00:11:58.875 "bdev_name": "Malloc2", 00:11:58.875 "name": "Malloc2", 00:11:58.875 "nguid": "49FF0A445420458F8AE6A6192E1FB031", 00:11:58.875 "uuid": "49ff0a44-5420-458f-8ae6-a6192e1fb031" 00:11:58.875 } 00:11:58.875 ] 00:11:58.875 } 00:11:58.875 ] 00:11:58.875 14:39:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:58.875 14:39:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2253865 00:11:58.875 14:39:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:58.875 14:39:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:58.875 14:39:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:58.875 14:39:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:58.875 14:39:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:58.875 14:39:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:58.875 14:39:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:58.875 14:39:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:58.875 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.135 [2024-07-25 14:39:19.248486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:59.135 Malloc3 00:11:59.135 14:39:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:59.395 [2024-07-25 14:39:19.458028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:59.395 14:39:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:59.395 Asynchronous Event Request test 00:11:59.395 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:59.395 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:59.395 Registering asynchronous event callbacks... 
00:11:59.395 Starting namespace attribute notice tests for all controllers... 00:11:59.395 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:59.395 aer_cb - Changed Namespace 00:11:59.395 Cleaning up... 00:11:59.395 [ 00:11:59.395 { 00:11:59.395 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:59.395 "subtype": "Discovery", 00:11:59.395 "listen_addresses": [], 00:11:59.395 "allow_any_host": true, 00:11:59.395 "hosts": [] 00:11:59.395 }, 00:11:59.395 { 00:11:59.395 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:59.395 "subtype": "NVMe", 00:11:59.395 "listen_addresses": [ 00:11:59.395 { 00:11:59.395 "trtype": "VFIOUSER", 00:11:59.395 "adrfam": "IPv4", 00:11:59.395 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:59.395 "trsvcid": "0" 00:11:59.395 } 00:11:59.395 ], 00:11:59.395 "allow_any_host": true, 00:11:59.395 "hosts": [], 00:11:59.395 "serial_number": "SPDK1", 00:11:59.395 "model_number": "SPDK bdev Controller", 00:11:59.395 "max_namespaces": 32, 00:11:59.395 "min_cntlid": 1, 00:11:59.395 "max_cntlid": 65519, 00:11:59.395 "namespaces": [ 00:11:59.395 { 00:11:59.395 "nsid": 1, 00:11:59.395 "bdev_name": "Malloc1", 00:11:59.395 "name": "Malloc1", 00:11:59.395 "nguid": "56C564D558244ED3B8FB31DF8E66CFFE", 00:11:59.395 "uuid": "56c564d5-5824-4ed3-b8fb-31df8e66cffe" 00:11:59.395 }, 00:11:59.395 { 00:11:59.395 "nsid": 2, 00:11:59.395 "bdev_name": "Malloc3", 00:11:59.395 "name": "Malloc3", 00:11:59.395 "nguid": "73C401DE369A4C75BFC9B889ADC5DBF4", 00:11:59.395 "uuid": "73c401de-369a-4c75-bfc9-b889adc5dbf4" 00:11:59.395 } 00:11:59.395 ] 00:11:59.395 }, 00:11:59.395 { 00:11:59.395 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:59.395 "subtype": "NVMe", 00:11:59.395 "listen_addresses": [ 00:11:59.395 { 00:11:59.395 "trtype": "VFIOUSER", 00:11:59.395 "adrfam": "IPv4", 00:11:59.395 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:59.395 "trsvcid": "0" 00:11:59.395 } 00:11:59.395 ], 00:11:59.395 "allow_any_host": true, 00:11:59.395 "hosts": [], 00:11:59.395 "serial_number": "SPDK2", 00:11:59.395 "model_number": "SPDK bdev Controller", 00:11:59.395 "max_namespaces": 32, 00:11:59.395 "min_cntlid": 1, 00:11:59.395 "max_cntlid": 65519, 00:11:59.395 "namespaces": [ 00:11:59.395 { 00:11:59.395 "nsid": 1, 00:11:59.395 "bdev_name": "Malloc2", 00:11:59.395 "name": "Malloc2", 00:11:59.395 "nguid": "49FF0A445420458F8AE6A6192E1FB031", 00:11:59.395 "uuid": "49ff0a44-5420-458f-8ae6-a6192e1fb031" 00:11:59.395 } 00:11:59.395 ] 00:11:59.395 } 00:11:59.395 ] 00:11:59.395 14:39:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2253865 00:11:59.395 14:39:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:59.395 14:39:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:59.395 14:39:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:59.395 14:39:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:59.657 [2024-07-25 14:39:19.690340] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:11:59.657 [2024-07-25 14:39:19.690371] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253881 ] 00:11:59.657 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.657 [2024-07-25 14:39:19.718425] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:59.657 [2024-07-25 14:39:19.721637] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:59.657 [2024-07-25 14:39:19.721655] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7effc843c000 00:11:59.657 [2024-07-25 14:39:19.722638] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.657 [2024-07-25 14:39:19.723648] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.657 [2024-07-25 14:39:19.724650] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.657 [2024-07-25 14:39:19.725660] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:59.657 [2024-07-25 14:39:19.726660] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:59.657 [2024-07-25 14:39:19.727666] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.657 [2024-07-25 14:39:19.728671] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:59.657 [2024-07-25 14:39:19.729682] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.657 [2024-07-25 14:39:19.730693] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:59.657 [2024-07-25 14:39:19.730703] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7effc8431000 00:11:59.657 [2024-07-25 14:39:19.731641] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:59.657 [2024-07-25 14:39:19.743155] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:59.657 [2024-07-25 14:39:19.743176] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:59.657 [2024-07-25 14:39:19.748262] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:59.657 [2024-07-25 14:39:19.748296] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:59.657 [2024-07-25 14:39:19.748359] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:11:59.657 [2024-07-25 14:39:19.748374] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:59.657 [2024-07-25 14:39:19.748379] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:59.657 [2024-07-25 14:39:19.749266] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:59.657 [2024-07-25 14:39:19.749276] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:59.657 [2024-07-25 14:39:19.749282] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:59.657 [2024-07-25 14:39:19.750273] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:59.657 [2024-07-25 14:39:19.750281] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:59.657 [2024-07-25 14:39:19.750287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:59.657 [2024-07-25 14:39:19.751284] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:59.657 [2024-07-25 14:39:19.751292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:59.657 [2024-07-25 14:39:19.752292] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:59.657 [2024-07-25 14:39:19.752301] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:59.657 [2024-07-25 14:39:19.752308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:59.657 [2024-07-25 14:39:19.752314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:59.657 [2024-07-25 14:39:19.752419] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:59.657 [2024-07-25 14:39:19.752423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:59.657 [2024-07-25 14:39:19.752427] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:59.657 [2024-07-25 14:39:19.753287] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:59.657 [2024-07-25 14:39:19.754302] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:59.657 [2024-07-25 14:39:19.755309] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:59.657 [2024-07-25 14:39:19.756316] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:59.657 [2024-07-25 14:39:19.756353] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:59.657 [2024-07-25 14:39:19.757326] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:59.657 [2024-07-25 14:39:19.757334] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:59.657 [2024-07-25 14:39:19.757338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:59.657 [2024-07-25 14:39:19.757355] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:59.657 [2024-07-25 14:39:19.757362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.757373] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:59.658 [2024-07-25 14:39:19.757377] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:59.658 [2024-07-25 14:39:19.757387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:59.658 [2024-07-25 14:39:19.765050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:59.658 [2024-07-25 14:39:19.765060] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:59.658 [2024-07-25 14:39:19.765066] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:59.658 [2024-07-25 14:39:19.765070] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:59.658 [2024-07-25 14:39:19.765074] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:59.658 [2024-07-25 14:39:19.765078] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:59.658 [2024-07-25 14:39:19.765082] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:59.658 [2024-07-25 14:39:19.765088] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.765094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.765104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:11:59.658 [2024-07-25 14:39:19.773048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:59.658 [2024-07-25 14:39:19.773062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.658 [2024-07-25 14:39:19.773070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.658 [2024-07-25 14:39:19.773077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.658 [2024-07-25 14:39:19.773085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.658 [2024-07-25 14:39:19.773089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.773096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.773105] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:59.658 [2024-07-25 14:39:19.781049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:59.658 [2024-07-25 14:39:19.781056] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:59.658 [2024-07-25 14:39:19.781061] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.781066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.781071] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.781079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:59.658 [2024-07-25 14:39:19.789047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:59.658 [2024-07-25 14:39:19.789098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.789105] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.789112] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:59.658 [2024-07-25 14:39:19.789116] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:59.658 [2024-07-25 14:39:19.789122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:11:59.658 [2024-07-25 14:39:19.797047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:59.658 [2024-07-25 14:39:19.797061] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:59.658 [2024-07-25 14:39:19.797070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.797077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.797083] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:59.658 [2024-07-25 14:39:19.797087] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:59.658 [2024-07-25 14:39:19.797093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:59.658 [2024-07-25 14:39:19.805048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:59.658 [2024-07-25 14:39:19.805062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.805069] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.805078] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:59.658 [2024-07-25 14:39:19.805082] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:59.658 [2024-07-25 14:39:19.805088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:59.658 [2024-07-25 14:39:19.813050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:59.658 [2024-07-25 14:39:19.813060] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.813066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.813075] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.813080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.813085] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.813089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:59.658 
[2024-07-25 14:39:19.813093] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:59.658 [2024-07-25 14:39:19.813097] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:59.658 [2024-07-25 14:39:19.813102] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:59.658 [2024-07-25 14:39:19.813117] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:59.658 [2024-07-25 14:39:19.821049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:59.658 [2024-07-25 14:39:19.821064] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:59.658 [2024-07-25 14:39:19.829050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:59.658 [2024-07-25 14:39:19.829066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:59.658 [2024-07-25 14:39:19.837050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:59.658 [2024-07-25 14:39:19.837063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:59.658 [2024-07-25 14:39:19.845047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:59.658 [2024-07-25 14:39:19.845065] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:59.658 [2024-07-25 14:39:19.845070] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:59.658 [2024-07-25 14:39:19.845073] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:59.658 [2024-07-25 14:39:19.845075] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:59.658 [2024-07-25 14:39:19.845081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:59.658 [2024-07-25 14:39:19.845088] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:59.658 [2024-07-25 14:39:19.845093] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:59.658 [2024-07-25 14:39:19.845099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:59.658 [2024-07-25 14:39:19.845107] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:59.658 [2024-07-25 14:39:19.845110] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:59.658 [2024-07-25 14:39:19.845116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:11:59.658 [2024-07-25 14:39:19.845122] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:59.658 [2024-07-25 14:39:19.845126] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:59.658 [2024-07-25 14:39:19.845131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:59.659 [2024-07-25 14:39:19.853051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:59.659 [2024-07-25 14:39:19.853066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:59.659 [2024-07-25 14:39:19.853076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:59.659 [2024-07-25 14:39:19.853082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:59.659 ===================================================== 00:11:59.659 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:59.659 ===================================================== 00:11:59.659 Controller Capabilities/Features 00:11:59.659 ================================ 00:11:59.659 Vendor ID: 4e58 00:11:59.659 Subsystem Vendor ID: 4e58 00:11:59.659 Serial Number: SPDK2 00:11:59.659 Model Number: SPDK bdev Controller 00:11:59.659 Firmware Version: 24.09 00:11:59.659 Recommended Arb Burst: 6 00:11:59.659 IEEE OUI Identifier: 8d 6b 50 00:11:59.659 Multi-path I/O 00:11:59.659 May have multiple subsystem ports: Yes 00:11:59.659 May have multiple controllers: Yes 00:11:59.659 Associated with SR-IOV VF: No 00:11:59.659 Max Data Transfer Size: 131072 00:11:59.659 Max Number of Namespaces: 32 00:11:59.659 Max Number of I/O Queues: 127 00:11:59.659 NVMe Specification Version (VS): 1.3 00:11:59.659 NVMe Specification Version (Identify): 1.3 00:11:59.659 Maximum Queue Entries: 256 00:11:59.659 Contiguous Queues Required: Yes 00:11:59.659 Arbitration Mechanisms Supported 00:11:59.659 Weighted Round Robin: Not Supported 00:11:59.659 Vendor Specific: Not Supported 00:11:59.659 Reset Timeout: 15000 ms 00:11:59.659 Doorbell Stride: 4 bytes 00:11:59.659 NVM Subsystem Reset: Not Supported 00:11:59.659 Command Sets Supported 00:11:59.659 NVM Command Set: Supported 00:11:59.659 Boot Partition: Not Supported 00:11:59.659 Memory Page Size Minimum: 4096 bytes 00:11:59.659 Memory Page Size Maximum: 4096 bytes 00:11:59.659 Persistent Memory Region: Not Supported 00:11:59.659 Optional Asynchronous Events Supported 00:11:59.659 Namespace Attribute Notices: Supported 00:11:59.659 Firmware Activation Notices: Not Supported 00:11:59.659 ANA Change Notices: Not Supported 00:11:59.659 PLE Aggregate Log Change Notices: Not Supported 00:11:59.659 LBA Status Info Alert Notices: Not Supported 00:11:59.659 EGE Aggregate Log Change Notices: Not Supported 00:11:59.659 Normal NVM Subsystem Shutdown event: Not Supported 00:11:59.659 Zone Descriptor Change Notices: Not Supported 00:11:59.659 Discovery Log Change Notices: Not Supported 00:11:59.659 Controller Attributes 00:11:59.659 128-bit Host Identifier: Supported 00:11:59.659 Non-Operational Permissive Mode: Not Supported 00:11:59.659 NVM Sets: Not Supported 00:11:59.659 Read Recovery Levels: Not Supported 
00:11:59.659 Endurance Groups: Not Supported 00:11:59.659 Predictable Latency Mode: Not Supported 00:11:59.659 Traffic Based Keep ALive: Not Supported 00:11:59.659 Namespace Granularity: Not Supported 00:11:59.659 SQ Associations: Not Supported 00:11:59.659 UUID List: Not Supported 00:11:59.659 Multi-Domain Subsystem: Not Supported 00:11:59.659 Fixed Capacity Management: Not Supported 00:11:59.659 Variable Capacity Management: Not Supported 00:11:59.659 Delete Endurance Group: Not Supported 00:11:59.659 Delete NVM Set: Not Supported 00:11:59.659 Extended LBA Formats Supported: Not Supported 00:11:59.659 Flexible Data Placement Supported: Not Supported 00:11:59.659 00:11:59.659 Controller Memory Buffer Support 00:11:59.659 ================================ 00:11:59.659 Supported: No 00:11:59.659 00:11:59.659 Persistent Memory Region Support 00:11:59.659 ================================ 00:11:59.659 Supported: No 00:11:59.659 00:11:59.659 Admin Command Set Attributes 00:11:59.659 ============================ 00:11:59.659 Security Send/Receive: Not Supported 00:11:59.659 Format NVM: Not Supported 00:11:59.659 Firmware Activate/Download: Not Supported 00:11:59.659 Namespace Management: Not Supported 00:11:59.659 Device Self-Test: Not Supported 00:11:59.659 Directives: Not Supported 00:11:59.659 NVMe-MI: Not Supported 00:11:59.659 Virtualization Management: Not Supported 00:11:59.659 Doorbell Buffer Config: Not Supported 00:11:59.659 Get LBA Status Capability: Not Supported 00:11:59.659 Command & Feature Lockdown Capability: Not Supported 00:11:59.659 Abort Command Limit: 4 00:11:59.659 Async Event Request Limit: 4 00:11:59.659 Number of Firmware Slots: N/A 00:11:59.659 Firmware Slot 1 Read-Only: N/A 00:11:59.659 Firmware Activation Without Reset: N/A 00:11:59.659 Multiple Update Detection Support: N/A 00:11:59.659 Firmware Update Granularity: No Information Provided 00:11:59.659 Per-Namespace SMART Log: No 00:11:59.659 Asymmetric Namespace Access Log Page: Not Supported 00:11:59.659 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:59.659 Command Effects Log Page: Supported 00:11:59.659 Get Log Page Extended Data: Supported 00:11:59.659 Telemetry Log Pages: Not Supported 00:11:59.659 Persistent Event Log Pages: Not Supported 00:11:59.659 Supported Log Pages Log Page: May Support 00:11:59.659 Commands Supported & Effects Log Page: Not Supported 00:11:59.659 Feature Identifiers & Effects Log Page:May Support 00:11:59.659 NVMe-MI Commands & Effects Log Page: May Support 00:11:59.659 Data Area 4 for Telemetry Log: Not Supported 00:11:59.659 Error Log Page Entries Supported: 128 00:11:59.659 Keep Alive: Supported 00:11:59.659 Keep Alive Granularity: 10000 ms 00:11:59.659 00:11:59.659 NVM Command Set Attributes 00:11:59.659 ========================== 00:11:59.659 Submission Queue Entry Size 00:11:59.659 Max: 64 00:11:59.659 Min: 64 00:11:59.659 Completion Queue Entry Size 00:11:59.659 Max: 16 00:11:59.659 Min: 16 00:11:59.659 Number of Namespaces: 32 00:11:59.659 Compare Command: Supported 00:11:59.659 Write Uncorrectable Command: Not Supported 00:11:59.659 Dataset Management Command: Supported 00:11:59.659 Write Zeroes Command: Supported 00:11:59.659 Set Features Save Field: Not Supported 00:11:59.659 Reservations: Not Supported 00:11:59.659 Timestamp: Not Supported 00:11:59.659 Copy: Supported 00:11:59.659 Volatile Write Cache: Present 00:11:59.659 Atomic Write Unit (Normal): 1 00:11:59.659 Atomic Write Unit (PFail): 1 00:11:59.659 Atomic Compare & Write Unit: 1 00:11:59.659 Fused Compare & Write: 
Supported 00:11:59.659 Scatter-Gather List 00:11:59.659 SGL Command Set: Supported (Dword aligned) 00:11:59.659 SGL Keyed: Not Supported 00:11:59.659 SGL Bit Bucket Descriptor: Not Supported 00:11:59.659 SGL Metadata Pointer: Not Supported 00:11:59.659 Oversized SGL: Not Supported 00:11:59.659 SGL Metadata Address: Not Supported 00:11:59.659 SGL Offset: Not Supported 00:11:59.659 Transport SGL Data Block: Not Supported 00:11:59.659 Replay Protected Memory Block: Not Supported 00:11:59.659 00:11:59.659 Firmware Slot Information 00:11:59.659 ========================= 00:11:59.659 Active slot: 1 00:11:59.659 Slot 1 Firmware Revision: 24.09 00:11:59.659 00:11:59.659 00:11:59.659 Commands Supported and Effects 00:11:59.659 ============================== 00:11:59.659 Admin Commands 00:11:59.659 -------------- 00:11:59.659 Get Log Page (02h): Supported 00:11:59.659 Identify (06h): Supported 00:11:59.659 Abort (08h): Supported 00:11:59.659 Set Features (09h): Supported 00:11:59.659 Get Features (0Ah): Supported 00:11:59.659 Asynchronous Event Request (0Ch): Supported 00:11:59.659 Keep Alive (18h): Supported 00:11:59.659 I/O Commands 00:11:59.659 ------------ 00:11:59.659 Flush (00h): Supported LBA-Change 00:11:59.659 Write (01h): Supported LBA-Change 00:11:59.659 Read (02h): Supported 00:11:59.659 Compare (05h): Supported 00:11:59.659 Write Zeroes (08h): Supported LBA-Change 00:11:59.659 Dataset Management (09h): Supported LBA-Change 00:11:59.659 Copy (19h): Supported LBA-Change 00:11:59.659 00:11:59.659 Error Log 00:11:59.659 ========= 00:11:59.659 00:11:59.659 Arbitration 00:11:59.659 =========== 00:11:59.660 Arbitration Burst: 1 00:11:59.660 00:11:59.660 Power Management 00:11:59.660 ================ 00:11:59.660 Number of Power States: 1 00:11:59.660 Current Power State: Power State #0 00:11:59.660 Power State #0: 00:11:59.660 Max Power: 0.00 W 00:11:59.660 Non-Operational State: Operational 00:11:59.660 Entry Latency: Not Reported 00:11:59.660 Exit Latency: Not Reported 00:11:59.660 Relative Read Throughput: 0 00:11:59.660 Relative Read Latency: 0 00:11:59.660 Relative Write Throughput: 0 00:11:59.660 Relative Write Latency: 0 00:11:59.660 Idle Power: Not Reported 00:11:59.660 Active Power: Not Reported 00:11:59.660 Non-Operational Permissive Mode: Not Supported 00:11:59.660 00:11:59.660 Health Information 00:11:59.660 ================== 00:11:59.660 Critical Warnings: 00:11:59.660 Available Spare Space: OK 00:11:59.660 Temperature: OK 00:11:59.660 Device Reliability: OK 00:11:59.660 Read Only: No 00:11:59.660 Volatile Memory Backup: OK 00:11:59.660 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:59.660 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:59.660 Available Spare: 0% 00:11:59.660 Available Sp[2024-07-25 14:39:19.853166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:59.660 [2024-07-25 14:39:19.861049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:59.660 [2024-07-25 14:39:19.861082] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:59.660 [2024-07-25 14:39:19.861090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.660 [2024-07-25 14:39:19.861096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.660 [2024-07-25 14:39:19.861101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.660 [2024-07-25 14:39:19.861108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.660 [2024-07-25 14:39:19.861149] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:59.660 [2024-07-25 14:39:19.861159] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:59.660 [2024-07-25 14:39:19.862154] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:59.660 [2024-07-25 14:39:19.862195] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:59.660 [2024-07-25 14:39:19.862201] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:59.660 [2024-07-25 14:39:19.863165] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:59.660 [2024-07-25 14:39:19.863176] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:59.660 [2024-07-25 14:39:19.863221] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:59.660 [2024-07-25 14:39:19.864196] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:59.660 are Threshold: 0% 00:11:59.660 Life Percentage Used: 0% 00:11:59.660 Data Units Read: 0 00:11:59.660 Data Units Written: 0 00:11:59.660 Host Read Commands: 0 00:11:59.660 Host Write Commands: 0 00:11:59.660 Controller Busy Time: 0 minutes 00:11:59.660 Power Cycles: 0 00:11:59.660 Power On Hours: 0 hours 00:11:59.660 Unsafe Shutdowns: 0 00:11:59.660 Unrecoverable Media Errors: 0 00:11:59.660 Lifetime Error Log Entries: 0 00:11:59.660 Warning Temperature Time: 0 minutes 00:11:59.660 Critical Temperature Time: 0 minutes 00:11:59.660 00:11:59.660 Number of Queues 00:11:59.660 ================ 00:11:59.660 Number of I/O Submission Queues: 127 00:11:59.660 Number of I/O Completion Queues: 127 00:11:59.660 00:11:59.660 Active Namespaces 00:11:59.660 ================= 00:11:59.660 Namespace ID:1 00:11:59.660 Error Recovery Timeout: Unlimited 00:11:59.660 Command Set Identifier: NVM (00h) 00:11:59.660 Deallocate: Supported 00:11:59.660 Deallocated/Unwritten Error: Not Supported 00:11:59.660 Deallocated Read Value: Unknown 00:11:59.660 Deallocate in Write Zeroes: Not Supported 00:11:59.660 Deallocated Guard Field: 0xFFFF 00:11:59.660 Flush: Supported 00:11:59.660 Reservation: Supported 00:11:59.660 Namespace Sharing Capabilities: Multiple Controllers 00:11:59.660 Size (in LBAs): 131072 (0GiB) 00:11:59.660 Capacity (in LBAs): 131072 (0GiB) 00:11:59.660 Utilization (in LBAs): 131072 (0GiB) 00:11:59.660 NGUID: 49FF0A445420458F8AE6A6192E1FB031 00:11:59.660 UUID: 49ff0a44-5420-458f-8ae6-a6192e1fb031 00:11:59.660 Thin Provisioning: Not Supported 00:11:59.660 Per-NS Atomic Units: Yes 00:11:59.660 Atomic Boundary Size (Normal): 0 00:11:59.660 Atomic Boundary Size 
(PFail): 0 00:11:59.660 Atomic Boundary Offset: 0 00:11:59.660 Maximum Single Source Range Length: 65535 00:11:59.660 Maximum Copy Length: 65535 00:11:59.660 Maximum Source Range Count: 1 00:11:59.660 NGUID/EUI64 Never Reused: No 00:11:59.660 Namespace Write Protected: No 00:11:59.660 Number of LBA Formats: 1 00:11:59.660 Current LBA Format: LBA Format #00 00:11:59.660 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:59.660 00:11:59.660 14:39:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:59.660 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.920 [2024-07-25 14:39:20.081543] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:05.204 Initializing NVMe Controllers 00:12:05.204 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:05.204 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:05.204 Initialization complete. Launching workers. 00:12:05.204 ======================================================== 00:12:05.204 Latency(us) 00:12:05.204 Device Information : IOPS MiB/s Average min max 00:12:05.204 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39937.52 156.01 3204.62 971.52 7616.38 00:12:05.204 ======================================================== 00:12:05.204 Total : 39937.52 156.01 3204.62 971.52 7616.38 00:12:05.204 00:12:05.204 [2024-07-25 14:39:25.187293] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:05.204 14:39:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:05.204 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.204 [2024-07-25 14:39:25.411970] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:10.488 Initializing NVMe Controllers 00:12:10.488 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:10.488 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:10.488 Initialization complete. Launching workers. 
00:12:10.488 ======================================================== 00:12:10.488 Latency(us) 00:12:10.488 Device Information : IOPS MiB/s Average min max 00:12:10.488 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39930.87 155.98 3205.13 958.22 6626.51 00:12:10.488 ======================================================== 00:12:10.488 Total : 39930.87 155.98 3205.13 958.22 6626.51 00:12:10.488 00:12:10.488 [2024-07-25 14:39:30.431338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:10.488 14:39:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:10.488 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.488 [2024-07-25 14:39:30.616716] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:15.780 [2024-07-25 14:39:35.748173] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:15.780 Initializing NVMe Controllers 00:12:15.780 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:15.780 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:15.780 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:15.780 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:15.780 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:15.780 Initialization complete. Launching workers. 00:12:15.780 Starting thread on core 2 00:12:15.780 Starting thread on core 3 00:12:15.780 Starting thread on core 1 00:12:15.780 14:39:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:15.780 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.780 [2024-07-25 14:39:36.031550] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:19.075 [2024-07-25 14:39:39.235245] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:19.075 Initializing NVMe Controllers 00:12:19.075 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:19.075 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:19.075 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:19.075 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:19.075 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:19.075 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:19.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:19.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:19.075 Initialization complete. Launching workers. 
00:12:19.075 Starting thread on core 1 with urgent priority queue 00:12:19.075 Starting thread on core 2 with urgent priority queue 00:12:19.075 Starting thread on core 3 with urgent priority queue 00:12:19.075 Starting thread on core 0 with urgent priority queue 00:12:19.075 SPDK bdev Controller (SPDK2 ) core 0: 6007.67 IO/s 16.65 secs/100000 ios 00:12:19.075 SPDK bdev Controller (SPDK2 ) core 1: 5027.00 IO/s 19.89 secs/100000 ios 00:12:19.075 SPDK bdev Controller (SPDK2 ) core 2: 6216.67 IO/s 16.09 secs/100000 ios 00:12:19.075 SPDK bdev Controller (SPDK2 ) core 3: 4820.00 IO/s 20.75 secs/100000 ios 00:12:19.075 ======================================================== 00:12:19.075 00:12:19.075 14:39:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:19.075 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.334 [2024-07-25 14:39:39.507012] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:19.334 Initializing NVMe Controllers 00:12:19.334 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:19.335 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:19.335 Namespace ID: 1 size: 0GB 00:12:19.335 Initialization complete. 00:12:19.335 INFO: using host memory buffer for IO 00:12:19.335 Hello world! 00:12:19.335 [2024-07-25 14:39:39.518106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:19.335 14:39:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:19.335 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.594 [2024-07-25 14:39:39.786931] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:20.973 Initializing NVMe Controllers 00:12:20.973 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:20.973 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:20.973 Initialization complete. Launching workers. 
00:12:20.973 submit (in ns) avg, min, max = 8184.8, 3267.8, 4013726.1 00:12:20.973 complete (in ns) avg, min, max = 20356.7, 1824.3, 6990508.7 00:12:20.973 00:12:20.973 Submit histogram 00:12:20.973 ================ 00:12:20.973 Range in us Cumulative Count 00:12:20.973 3.256 - 3.270: 0.0123% ( 2) 00:12:20.973 3.270 - 3.283: 0.2221% ( 34) 00:12:20.973 3.283 - 3.297: 1.0119% ( 128) 00:12:20.973 3.297 - 3.311: 2.2521% ( 201) 00:12:20.973 3.311 - 3.325: 3.7576% ( 244) 00:12:20.973 3.325 - 3.339: 6.2627% ( 406) 00:12:20.973 3.339 - 3.353: 10.2672% ( 649) 00:12:20.973 3.353 - 3.367: 15.7895% ( 895) 00:12:20.973 3.367 - 3.381: 21.6511% ( 950) 00:12:20.973 3.381 - 3.395: 27.6485% ( 972) 00:12:20.973 3.395 - 3.409: 33.1277% ( 888) 00:12:20.973 3.409 - 3.423: 38.3353% ( 844) 00:12:20.973 3.423 - 3.437: 43.4010% ( 821) 00:12:20.973 3.437 - 3.450: 49.4169% ( 975) 00:12:20.973 3.450 - 3.464: 54.0137% ( 745) 00:12:20.973 3.464 - 3.478: 58.0243% ( 650) 00:12:20.973 3.478 - 3.492: 62.9728% ( 802) 00:12:20.973 3.492 - 3.506: 69.0628% ( 987) 00:12:20.973 3.506 - 3.520: 73.4806% ( 716) 00:12:20.973 3.520 - 3.534: 76.7261% ( 526) 00:12:20.973 3.534 - 3.548: 80.5393% ( 618) 00:12:20.973 3.548 - 3.562: 83.3467% ( 455) 00:12:20.973 3.562 - 3.590: 86.2529% ( 471) 00:12:20.973 3.590 - 3.617: 87.5548% ( 211) 00:12:20.973 3.617 - 3.645: 88.7333% ( 191) 00:12:20.973 3.645 - 3.673: 90.2079% ( 239) 00:12:20.973 3.673 - 3.701: 92.0281% ( 295) 00:12:20.973 3.701 - 3.729: 94.0581% ( 329) 00:12:20.973 3.729 - 3.757: 95.6439% ( 257) 00:12:20.973 3.757 - 3.784: 96.9581% ( 213) 00:12:20.973 3.784 - 3.812: 97.9330% ( 158) 00:12:20.973 3.812 - 3.840: 98.4945% ( 91) 00:12:20.973 3.840 - 3.868: 98.8955% ( 65) 00:12:20.973 3.868 - 3.896: 99.1423% ( 40) 00:12:20.973 3.896 - 3.923: 99.2287% ( 14) 00:12:20.973 3.923 - 3.951: 99.2596% ( 5) 00:12:20.973 3.951 - 3.979: 99.2719% ( 2) 00:12:20.973 3.979 - 4.007: 99.2966% ( 4) 00:12:20.973 4.007 - 4.035: 99.3089% ( 2) 00:12:20.973 4.035 - 4.063: 99.3213% ( 2) 00:12:20.973 4.063 - 4.090: 99.3275% ( 1) 00:12:20.973 4.090 - 4.118: 99.3460% ( 3) 00:12:20.973 4.118 - 4.146: 99.3645% ( 3) 00:12:20.973 4.146 - 4.174: 99.3706% ( 1) 00:12:20.973 4.174 - 4.202: 99.3830% ( 2) 00:12:20.973 4.230 - 4.257: 99.3892% ( 1) 00:12:20.973 4.285 - 4.313: 99.3953% ( 1) 00:12:20.973 4.313 - 4.341: 99.4015% ( 1) 00:12:20.973 4.341 - 4.369: 99.4077% ( 1) 00:12:20.973 4.480 - 4.508: 99.4138% ( 1) 00:12:20.973 4.563 - 4.591: 99.4200% ( 1) 00:12:20.973 4.591 - 4.619: 99.4262% ( 1) 00:12:20.973 4.619 - 4.647: 99.4323% ( 1) 00:12:20.973 4.647 - 4.675: 99.4385% ( 1) 00:12:20.973 4.758 - 4.786: 99.4447% ( 1) 00:12:20.973 4.814 - 4.842: 99.4509% ( 1) 00:12:20.973 4.842 - 4.870: 99.4632% ( 2) 00:12:20.973 5.009 - 5.037: 99.4694% ( 1) 00:12:20.973 5.064 - 5.092: 99.4755% ( 1) 00:12:20.973 5.176 - 5.203: 99.4817% ( 1) 00:12:20.973 5.510 - 5.537: 99.4940% ( 2) 00:12:20.973 5.593 - 5.621: 99.5002% ( 1) 00:12:20.973 5.649 - 5.677: 99.5064% ( 1) 00:12:20.973 5.704 - 5.732: 99.5187% ( 2) 00:12:20.973 5.732 - 5.760: 99.5249% ( 1) 00:12:20.973 5.760 - 5.788: 99.5372% ( 2) 00:12:20.973 5.788 - 5.816: 99.5434% ( 1) 00:12:20.973 5.843 - 5.871: 99.5557% ( 2) 00:12:20.973 5.927 - 5.955: 99.5619% ( 1) 00:12:20.973 5.955 - 5.983: 99.5681% ( 1) 00:12:20.973 5.983 - 6.010: 99.5743% ( 1) 00:12:20.973 6.010 - 6.038: 99.5804% ( 1) 00:12:20.973 6.122 - 6.150: 99.6051% ( 4) 00:12:20.973 6.177 - 6.205: 99.6113% ( 1) 00:12:20.973 6.205 - 6.233: 99.6174% ( 1) 00:12:20.973 6.233 - 6.261: 99.6236% ( 1) 00:12:20.973 6.261 - 6.289: 99.6360% ( 2) 
00:12:20.973 6.289 - 6.317: 99.6421% ( 1) 00:12:20.973 6.344 - 6.372: 99.6483% ( 1) 00:12:20.973 6.456 - 6.483: 99.6606% ( 2) 00:12:20.973 6.595 - 6.623: 99.6668% ( 1) 00:12:20.973 6.678 - 6.706: 99.6730% ( 1) 00:12:20.973 6.845 - 6.873: 99.6792% ( 1) 00:12:20.973 6.929 - 6.957: 99.6853% ( 1) 00:12:20.973 7.040 - 7.068: 99.6915% ( 1) 00:12:20.973 7.123 - 7.179: 99.6977% ( 1) 00:12:20.973 7.179 - 7.235: 99.7038% ( 1) 00:12:20.973 7.235 - 7.290: 99.7162% ( 2) 00:12:20.973 7.290 - 7.346: 99.7285% ( 2) 00:12:20.973 7.457 - 7.513: 99.7347% ( 1) 00:12:20.973 8.070 - 8.125: 99.7409% ( 1) 00:12:20.973 8.125 - 8.181: 99.7470% ( 1) 00:12:20.973 8.237 - 8.292: 99.7532% ( 1) 00:12:20.973 8.682 - 8.737: 99.7594% ( 1) 00:12:20.973 8.849 - 8.904: 99.7655% ( 1) 00:12:20.973 8.960 - 9.016: 99.7717% ( 1) 00:12:20.973 9.183 - 9.238: 99.7779% ( 1) 00:12:20.973 9.850 - 9.906: 99.7840% ( 1) 00:12:20.973 11.130 - 11.186: 99.7902% ( 1) 00:12:20.973 12.132 - 12.188: 99.7964% ( 1) 00:12:20.973 12.355 - 12.410: 99.8026% ( 1) 00:12:20.973 13.635 - 13.690: 99.8087% ( 1) 00:12:20.973 13.969 - 14.024: 99.8149% ( 1) 00:12:20.973 14.358 - 14.470: 99.8211% ( 1) 00:12:20.973 15.694 - 15.805: 99.8334% ( 2) 00:12:20.973 15.805 - 15.917: 99.8396% ( 1) 00:12:20.973 17.030 - 17.141: 99.8457% ( 1) 00:12:20.973 17.252 - 17.363: 99.8519% ( 1) 00:12:20.973 22.150 - 22.261: 99.8581% ( 1) 00:12:20.973 22.929 - 23.040: 99.8643% ( 1) 00:12:20.973 26.157 - 26.268: 99.8704% ( 1) 00:12:20.973 31.388 - 31.610: 99.8766% ( 1) 00:12:20.973 56.097 - 56.320: 99.8828% ( 1) 00:12:20.973 3989.148 - 4017.642: 100.0000% ( 19) 00:12:20.973 00:12:20.973 Complete histogram 00:12:20.973 ================== 00:12:20.973 Range in us Cumulative Count 00:12:20.973 1.823 - 1.837: 0.2653% ( 43) 00:12:20.973 1.837 - 1.850: 1.2340% ( 157) 00:12:20.973 1.850 - 1.864: 2.6223% ( 225) 00:12:20.973 1.864 - 1.878: 5.3002% ( 434) 00:12:20.973 1.878 - 1.892: 49.1454% ( 7106) 00:12:20.973 1.892 - 1.906: 88.5728% ( 6390) 00:12:20.973 1.906 - 1.920: 93.7064% ( 832) 00:12:20.973 1.920 - 1.934: 95.5328% ( 296) 00:12:20.973 1.934 - 1.948: 95.9894% ( 74) 00:12:20.973 1.948 - 1.962: 96.8347% ( 137) 00:12:20.973 1.962 - 1.976: 97.7664% ( 151) 00:12:20.973 1.976 - 1.990: 98.2970% ( 86) 00:12:20.973 1.990 - 2.003: 98.4081% ( 18) 00:12:20.973 2.003 - 2.017: 98.4636% ( 9) 00:12:20.973 2.017 - 2.031: 98.5068% ( 7) 00:12:20.973 2.031 - 2.045: 98.5192% ( 2) 00:12:20.973 2.045 - 2.059: 98.5438% ( 4) 00:12:20.973 2.059 - 2.073: 98.6672% ( 20) 00:12:20.973 2.073 - 2.087: 98.8955% ( 37) 00:12:20.973 2.087 - 2.101: 98.9758% ( 13) 00:12:20.973 2.101 - 2.115: 99.0066% ( 5) 00:12:20.973 2.115 - 2.129: 99.0313% ( 4) 00:12:20.973 2.129 - 2.143: 99.0621% ( 5) 00:12:20.973 2.143 - 2.157: 99.0868% ( 4) 00:12:20.973 2.157 - 2.170: 99.1177% ( 5) 00:12:20.973 2.170 - 2.184: 99.1362% ( 3) 00:12:20.973 2.212 - 2.226: 99.1485% ( 2) 00:12:20.973 2.226 - 2.240: 99.1547% ( 1) 00:12:20.973 2.240 - 2.254: 99.1670% ( 2) 00:12:20.973 2.268 - 2.282: 99.1732% ( 1) 00:12:20.973 2.310 - 2.323: 99.1794% ( 1) 00:12:20.973 2.323 - 2.337: 99.1855% ( 1) 00:12:20.973 2.351 - 2.365: 99.1979% ( 2) 00:12:20.973 2.435 - 2.449: 99.2040% ( 1) 00:12:20.973 2.449 - 2.463: 99.2102% ( 1) 00:12:20.973 2.546 - 2.560: 99.2164% ( 1) 00:12:20.973 2.602 - 2.616: 99.2226% ( 1) 00:12:20.973 2.616 - 2.630: 99.2287% ( 1) 00:12:20.974 2.643 - 2.657: 99.2349% ( 1) 00:12:20.974 2.699 - 2.713: 99.2411% ( 1) 00:12:20.974 2.755 - 2.769: 99.2472% ( 1) 00:12:20.974 2.866 - 2.880: 99.2534% ( 1) 00:12:20.974 2.894 - 2.908: 99.2596% ( 1) 00:12:20.974 
3.075 - 3.089: 99.2657% ( 1) 00:12:20.974 3.423 - 3.437: 99.2719% ( 1) 00:12:20.974 3.645 - 3.673: 99.2781% ( 1) 00:12:20.974 3.784 - 3.812: 99.2843% ( 1) 00:12:20.974 3.868 - 3.896: 99.2904% ( 1) 00:12:20.974 3.979 - 4.007: 99.3028% ( 2) 00:12:20.974 4.035 - 4.063: 99.3089% ( 1) 00:12:20.974 4.063 - 4.0[2024-07-25 14:39:40.880157] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:20.974 90: 99.3151% ( 1) 00:12:20.974 4.118 - 4.146: 99.3213% ( 1) 00:12:20.974 4.174 - 4.202: 99.3275% ( 1) 00:12:20.974 4.230 - 4.257: 99.3336% ( 1) 00:12:20.974 4.285 - 4.313: 99.3398% ( 1) 00:12:20.974 4.313 - 4.341: 99.3460% ( 1) 00:12:20.974 4.341 - 4.369: 99.3583% ( 2) 00:12:20.974 4.369 - 4.397: 99.3645% ( 1) 00:12:20.974 4.452 - 4.480: 99.3706% ( 1) 00:12:20.974 4.536 - 4.563: 99.3768% ( 1) 00:12:20.974 4.619 - 4.647: 99.3830% ( 1) 00:12:20.974 4.703 - 4.730: 99.3892% ( 1) 00:12:20.974 4.758 - 4.786: 99.3953% ( 1) 00:12:20.974 5.009 - 5.037: 99.4015% ( 1) 00:12:20.974 5.203 - 5.231: 99.4077% ( 1) 00:12:20.974 5.398 - 5.426: 99.4138% ( 1) 00:12:20.974 5.454 - 5.482: 99.4200% ( 1) 00:12:20.974 5.482 - 5.510: 99.4262% ( 1) 00:12:20.974 5.593 - 5.621: 99.4323% ( 1) 00:12:20.974 5.871 - 5.899: 99.4385% ( 1) 00:12:20.974 6.122 - 6.150: 99.4447% ( 1) 00:12:20.974 7.012 - 7.040: 99.4509% ( 1) 00:12:20.974 7.123 - 7.179: 99.4570% ( 1) 00:12:20.974 7.290 - 7.346: 99.4632% ( 1) 00:12:20.974 7.346 - 7.402: 99.4694% ( 1) 00:12:20.974 7.569 - 7.624: 99.4755% ( 1) 00:12:20.974 7.680 - 7.736: 99.4879% ( 2) 00:12:20.974 9.183 - 9.238: 99.4940% ( 1) 00:12:20.974 12.188 - 12.243: 99.5002% ( 1) 00:12:20.974 12.577 - 12.633: 99.5064% ( 1) 00:12:20.974 13.134 - 13.190: 99.5126% ( 1) 00:12:20.974 14.024 - 14.080: 99.5187% ( 1) 00:12:20.974 17.252 - 17.363: 99.5249% ( 1) 00:12:20.974 24.821 - 24.932: 99.5311% ( 1) 00:12:20.974 36.508 - 36.730: 99.5372% ( 1) 00:12:20.974 1011.534 - 1018.657: 99.5434% ( 1) 00:12:20.974 3362.282 - 3376.529: 99.5496% ( 1) 00:12:20.974 3989.148 - 4017.642: 99.9938% ( 72) 00:12:20.974 6981.009 - 7009.503: 100.0000% ( 1) 00:12:20.974 00:12:20.974 14:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:20.974 14:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:20.974 14:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:20.974 14:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:20.974 14:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:20.974 [ 00:12:20.974 { 00:12:20.974 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:20.974 "subtype": "Discovery", 00:12:20.974 "listen_addresses": [], 00:12:20.974 "allow_any_host": true, 00:12:20.974 "hosts": [] 00:12:20.974 }, 00:12:20.974 { 00:12:20.974 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:20.974 "subtype": "NVMe", 00:12:20.974 "listen_addresses": [ 00:12:20.974 { 00:12:20.974 "trtype": "VFIOUSER", 00:12:20.974 "adrfam": "IPv4", 00:12:20.974 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:20.974 "trsvcid": "0" 00:12:20.974 } 00:12:20.974 ], 00:12:20.974 "allow_any_host": true, 00:12:20.974 "hosts": [], 00:12:20.974 "serial_number": "SPDK1", 00:12:20.974 "model_number": "SPDK bdev Controller", 
00:12:20.974 "max_namespaces": 32, 00:12:20.974 "min_cntlid": 1, 00:12:20.974 "max_cntlid": 65519, 00:12:20.974 "namespaces": [ 00:12:20.974 { 00:12:20.974 "nsid": 1, 00:12:20.974 "bdev_name": "Malloc1", 00:12:20.974 "name": "Malloc1", 00:12:20.974 "nguid": "56C564D558244ED3B8FB31DF8E66CFFE", 00:12:20.974 "uuid": "56c564d5-5824-4ed3-b8fb-31df8e66cffe" 00:12:20.974 }, 00:12:20.974 { 00:12:20.974 "nsid": 2, 00:12:20.974 "bdev_name": "Malloc3", 00:12:20.974 "name": "Malloc3", 00:12:20.974 "nguid": "73C401DE369A4C75BFC9B889ADC5DBF4", 00:12:20.974 "uuid": "73c401de-369a-4c75-bfc9-b889adc5dbf4" 00:12:20.974 } 00:12:20.974 ] 00:12:20.974 }, 00:12:20.974 { 00:12:20.974 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:20.974 "subtype": "NVMe", 00:12:20.974 "listen_addresses": [ 00:12:20.974 { 00:12:20.974 "trtype": "VFIOUSER", 00:12:20.974 "adrfam": "IPv4", 00:12:20.974 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:20.974 "trsvcid": "0" 00:12:20.974 } 00:12:20.974 ], 00:12:20.974 "allow_any_host": true, 00:12:20.974 "hosts": [], 00:12:20.974 "serial_number": "SPDK2", 00:12:20.974 "model_number": "SPDK bdev Controller", 00:12:20.974 "max_namespaces": 32, 00:12:20.974 "min_cntlid": 1, 00:12:20.974 "max_cntlid": 65519, 00:12:20.974 "namespaces": [ 00:12:20.974 { 00:12:20.974 "nsid": 1, 00:12:20.974 "bdev_name": "Malloc2", 00:12:20.974 "name": "Malloc2", 00:12:20.974 "nguid": "49FF0A445420458F8AE6A6192E1FB031", 00:12:20.974 "uuid": "49ff0a44-5420-458f-8ae6-a6192e1fb031" 00:12:20.974 } 00:12:20.974 ] 00:12:20.974 } 00:12:20.974 ] 00:12:20.974 14:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:20.974 14:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:20.974 14:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2257516 00:12:20.974 14:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:20.974 14:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:20.974 14:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:20.974 14:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:20.974 14:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:20.974 14:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:20.974 14:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:20.974 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.974 [2024-07-25 14:39:41.255432] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:21.234 Malloc4 00:12:21.234 14:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:21.234 [2024-07-25 14:39:41.490178] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:21.234 14:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:21.494 Asynchronous Event Request test 00:12:21.494 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:21.494 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:21.494 Registering asynchronous event callbacks... 00:12:21.494 Starting namespace attribute notice tests for all controllers... 00:12:21.494 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:21.494 aer_cb - Changed Namespace 00:12:21.494 Cleaning up... 00:12:21.494 [ 00:12:21.494 { 00:12:21.494 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:21.494 "subtype": "Discovery", 00:12:21.494 "listen_addresses": [], 00:12:21.494 "allow_any_host": true, 00:12:21.494 "hosts": [] 00:12:21.494 }, 00:12:21.494 { 00:12:21.494 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:21.494 "subtype": "NVMe", 00:12:21.494 "listen_addresses": [ 00:12:21.494 { 00:12:21.494 "trtype": "VFIOUSER", 00:12:21.494 "adrfam": "IPv4", 00:12:21.494 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:21.494 "trsvcid": "0" 00:12:21.494 } 00:12:21.494 ], 00:12:21.494 "allow_any_host": true, 00:12:21.494 "hosts": [], 00:12:21.494 "serial_number": "SPDK1", 00:12:21.494 "model_number": "SPDK bdev Controller", 00:12:21.494 "max_namespaces": 32, 00:12:21.494 "min_cntlid": 1, 00:12:21.494 "max_cntlid": 65519, 00:12:21.494 "namespaces": [ 00:12:21.494 { 00:12:21.494 "nsid": 1, 00:12:21.494 "bdev_name": "Malloc1", 00:12:21.494 "name": "Malloc1", 00:12:21.494 "nguid": "56C564D558244ED3B8FB31DF8E66CFFE", 00:12:21.494 "uuid": "56c564d5-5824-4ed3-b8fb-31df8e66cffe" 00:12:21.494 }, 00:12:21.494 { 00:12:21.494 "nsid": 2, 00:12:21.494 "bdev_name": "Malloc3", 00:12:21.494 "name": "Malloc3", 00:12:21.494 "nguid": "73C401DE369A4C75BFC9B889ADC5DBF4", 00:12:21.494 "uuid": "73c401de-369a-4c75-bfc9-b889adc5dbf4" 00:12:21.494 } 00:12:21.494 ] 00:12:21.494 }, 00:12:21.494 { 00:12:21.494 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:21.494 "subtype": "NVMe", 00:12:21.494 "listen_addresses": [ 00:12:21.494 { 00:12:21.494 "trtype": "VFIOUSER", 00:12:21.494 "adrfam": "IPv4", 00:12:21.494 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:21.494 "trsvcid": "0" 00:12:21.494 } 00:12:21.494 ], 00:12:21.494 "allow_any_host": true, 00:12:21.494 "hosts": [], 00:12:21.494 "serial_number": "SPDK2", 00:12:21.494 "model_number": "SPDK bdev Controller", 00:12:21.494 
"max_namespaces": 32, 00:12:21.494 "min_cntlid": 1, 00:12:21.494 "max_cntlid": 65519, 00:12:21.494 "namespaces": [ 00:12:21.494 { 00:12:21.494 "nsid": 1, 00:12:21.494 "bdev_name": "Malloc2", 00:12:21.494 "name": "Malloc2", 00:12:21.494 "nguid": "49FF0A445420458F8AE6A6192E1FB031", 00:12:21.494 "uuid": "49ff0a44-5420-458f-8ae6-a6192e1fb031" 00:12:21.494 }, 00:12:21.494 { 00:12:21.494 "nsid": 2, 00:12:21.494 "bdev_name": "Malloc4", 00:12:21.494 "name": "Malloc4", 00:12:21.494 "nguid": "38DDEB29AF974C699F6041A96031DC4A", 00:12:21.494 "uuid": "38ddeb29-af97-4c69-9f60-41a96031dc4a" 00:12:21.495 } 00:12:21.495 ] 00:12:21.495 } 00:12:21.495 ] 00:12:21.495 14:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2257516 00:12:21.495 14:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:21.495 14:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2249190 00:12:21.495 14:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2249190 ']' 00:12:21.495 14:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2249190 00:12:21.495 14:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:21.495 14:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.495 14:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2249190 00:12:21.495 14:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:21.495 14:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:21.495 14:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2249190' 00:12:21.495 killing process with pid 2249190 00:12:21.495 14:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2249190 00:12:21.495 14:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2249190 00:12:21.755 14:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:21.755 14:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:21.755 14:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:21.755 14:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:21.755 14:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:21.755 14:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2257574 00:12:21.755 14:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2257574' 00:12:21.755 Process pid: 2257574 00:12:21.755 14:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:21.755 14:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:21.755 14:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2257574 00:12:21.755 14:39:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2257574 ']' 00:12:21.755 14:39:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.755 14:39:42 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.755 14:39:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.755 14:39:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.755 14:39:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:22.015 [2024-07-25 14:39:42.050696] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:22.015 [2024-07-25 14:39:42.051626] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:12:22.015 [2024-07-25 14:39:42.051666] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.015 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.015 [2024-07-25 14:39:42.105870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.015 [2024-07-25 14:39:42.186780] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.015 [2024-07-25 14:39:42.186818] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.015 [2024-07-25 14:39:42.186825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.015 [2024-07-25 14:39:42.186831] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.015 [2024-07-25 14:39:42.186836] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.015 [2024-07-25 14:39:42.186900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.015 [2024-07-25 14:39:42.186993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.015 [2024-07-25 14:39:42.187076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.015 [2024-07-25 14:39:42.187078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.015 [2024-07-25 14:39:42.261068] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:22.015 [2024-07-25 14:39:42.261203] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:22.015 [2024-07-25 14:39:42.261361] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:22.015 [2024-07-25 14:39:42.261665] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:22.015 [2024-07-25 14:39:42.261843] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:12:22.585 14:39:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.585 14:39:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:22.585 14:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:23.967 14:39:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:23.967 14:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:23.967 14:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:23.967 14:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:23.967 14:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:23.967 14:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:23.967 Malloc1 00:12:23.967 14:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:24.226 14:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:24.485 14:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:24.746 14:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:24.746 14:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:24.746 14:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:24.746 Malloc2 00:12:24.746 14:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:25.006 14:39:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:25.265 14:39:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:25.265 14:39:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:25.265 14:39:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2257574 00:12:25.265 14:39:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2257574 ']' 00:12:25.265 14:39:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2257574 00:12:25.265 14:39:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:25.265 14:39:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:25.265 14:39:45 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2257574 00:12:25.525 14:39:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:25.525 14:39:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:25.525 14:39:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2257574' 00:12:25.525 killing process with pid 2257574 00:12:25.525 14:39:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2257574 00:12:25.525 14:39:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2257574 00:12:25.525 14:39:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:25.525 14:39:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:25.525 00:12:25.525 real 0m51.377s 00:12:25.525 user 3m23.418s 00:12:25.525 sys 0m3.615s 00:12:25.525 14:39:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:25.525 14:39:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:25.525 ************************************ 00:12:25.525 END TEST nvmf_vfio_user 00:12:25.525 ************************************ 00:12:25.784 14:39:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:25.784 14:39:45 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:25.784 14:39:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:25.784 14:39:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.784 14:39:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:25.784 ************************************ 00:12:25.784 START TEST nvmf_vfio_user_nvme_compliance 00:12:25.784 ************************************ 00:12:25.784 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:25.784 * Looking for test storage... 
00:12:25.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:25.784 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.784 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:25.784 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2258327 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2258327' 00:12:25.785 Process pid: 2258327 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2258327 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2258327 ']' 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:25.785 14:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:25.785 [2024-07-25 14:39:46.026402] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:12:25.785 [2024-07-25 14:39:46.026450] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.785 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.045 [2024-07-25 14:39:46.080923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:26.045 [2024-07-25 14:39:46.154586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.045 [2024-07-25 14:39:46.154628] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.045 [2024-07-25 14:39:46.154635] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.045 [2024-07-25 14:39:46.154641] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.045 [2024-07-25 14:39:46.154646] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:26.045 [2024-07-25 14:39:46.154694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.045 [2024-07-25 14:39:46.154790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.045 [2024-07-25 14:39:46.154792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.611 14:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.612 14:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:26.612 14:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:27.551 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:27.551 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:27.551 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:27.551 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.551 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.811 malloc0 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.811 14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.811 
14:39:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:27.811 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.811 00:12:27.811 00:12:27.811 CUnit - A unit testing framework for C - Version 2.1-3 00:12:27.811 http://cunit.sourceforge.net/ 00:12:27.811 00:12:27.811 00:12:27.811 Suite: nvme_compliance 00:12:27.811 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 14:39:48.051339] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.811 [2024-07-25 14:39:48.052675] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:27.811 [2024-07-25 14:39:48.052690] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:27.811 [2024-07-25 14:39:48.052696] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:27.811 [2024-07-25 14:39:48.054360] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.811 passed 00:12:28.070 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 14:39:48.132925] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.070 [2024-07-25 14:39:48.135946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.070 passed 00:12:28.070 Test: admin_identify_ns ...[2024-07-25 14:39:48.216487] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.070 [2024-07-25 14:39:48.275055] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:28.070 [2024-07-25 14:39:48.283054] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:28.070 [2024-07-25 14:39:48.304150] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.070 passed 00:12:28.330 Test: admin_get_features_mandatory_features ...[2024-07-25 14:39:48.382104] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.330 [2024-07-25 14:39:48.387135] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.330 passed 00:12:28.330 Test: admin_get_features_optional_features ...[2024-07-25 14:39:48.462646] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.330 [2024-07-25 14:39:48.465664] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.330 passed 00:12:28.330 Test: admin_set_features_number_of_queues ...[2024-07-25 14:39:48.543607] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.590 [2024-07-25 14:39:48.649145] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.590 passed 00:12:28.590 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 14:39:48.728421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.590 [2024-07-25 14:39:48.731442] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.590 passed 00:12:28.590 Test: admin_get_log_page_with_lpo ...[2024-07-25 14:39:48.806468] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.590 [2024-07-25 14:39:48.878064] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:28.850 [2024-07-25 14:39:48.891109] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.850 passed 00:12:28.850 Test: fabric_property_get ...[2024-07-25 14:39:48.967266] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.850 [2024-07-25 14:39:48.968505] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:28.850 [2024-07-25 14:39:48.970286] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.850 passed 00:12:28.850 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 14:39:49.048800] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.850 [2024-07-25 14:39:49.050027] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:28.850 [2024-07-25 14:39:49.051817] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.850 passed 00:12:28.850 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 14:39:49.128499] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:29.109 [2024-07-25 14:39:49.216051] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:29.109 [2024-07-25 14:39:49.232058] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:29.109 [2024-07-25 14:39:49.237149] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.109 passed 00:12:29.109 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 14:39:49.312356] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:29.109 [2024-07-25 14:39:49.313596] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:29.109 [2024-07-25 14:39:49.315384] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.109 passed 00:12:29.109 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 14:39:49.395533] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:29.368 [2024-07-25 14:39:49.472049] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:29.368 [2024-07-25 14:39:49.496053] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:29.368 [2024-07-25 14:39:49.501134] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.368 passed 00:12:29.368 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 14:39:49.576389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:29.368 [2024-07-25 14:39:49.577626] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:29.368 [2024-07-25 14:39:49.577651] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:29.368 [2024-07-25 14:39:49.579415] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.368 passed 00:12:29.368 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 14:39:49.652311] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:29.628 [2024-07-25 14:39:49.748052] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:12:29.628 [2024-07-25 14:39:49.756049] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:29.628 [2024-07-25 14:39:49.764050] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:29.628 [2024-07-25 14:39:49.772049] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:29.628 [2024-07-25 14:39:49.801127] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.628 passed 00:12:29.628 Test: admin_create_io_sq_verify_pc ...[2024-07-25 14:39:49.877321] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:29.628 [2024-07-25 14:39:49.894060] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:29.628 [2024-07-25 14:39:49.911499] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.888 passed 00:12:29.888 Test: admin_create_io_qp_max_qps ...[2024-07-25 14:39:49.989016] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.830 [2024-07-25 14:39:51.094051] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:31.424 [2024-07-25 14:39:51.480530] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.424 passed 00:12:31.424 Test: admin_create_io_sq_shared_cq ...[2024-07-25 14:39:51.557657] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.424 [2024-07-25 14:39:51.690050] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:31.685 [2024-07-25 14:39:51.727120] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.685 passed 00:12:31.685 00:12:31.685 Run Summary: Type Total Ran Passed Failed Inactive 00:12:31.685 suites 1 1 n/a 0 0 00:12:31.685 tests 18 18 18 0 0 00:12:31.685 asserts 360 360 360 0 n/a 00:12:31.685 00:12:31.685 Elapsed time = 1.515 seconds 00:12:31.685 14:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2258327 00:12:31.685 14:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2258327 ']' 00:12:31.685 14:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2258327 00:12:31.685 14:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:31.685 14:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:31.685 14:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2258327 00:12:31.685 14:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:31.685 14:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:31.685 14:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2258327' 00:12:31.685 killing process with pid 2258327 00:12:31.685 14:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2258327 00:12:31.685 14:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2258327 00:12:31.945 14:39:52 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:31.946 00:12:31.946 real 0m6.157s 00:12:31.946 user 0m17.590s 00:12:31.946 sys 0m0.451s 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:31.946 ************************************ 00:12:31.946 END TEST nvmf_vfio_user_nvme_compliance 00:12:31.946 ************************************ 00:12:31.946 14:39:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:31.946 14:39:52 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:31.946 14:39:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:31.946 14:39:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.946 14:39:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:31.946 ************************************ 00:12:31.946 START TEST nvmf_vfio_user_fuzz 00:12:31.946 ************************************ 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:31.946 * Looking for test storage... 00:12:31.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.946 14:39:52 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2259440 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2259440' 00:12:31.946 Process pid: 2259440 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2259440 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2259440 ']' 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.946 14:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:32.887 14:39:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.887 14:39:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:32.887 14:39:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:33.826 malloc0 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.826 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:34.086 14:39:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:06.203 Fuzzing completed. 
Shutting down the fuzz application 00:13:06.203 00:13:06.203 Dumping successful admin opcodes: 00:13:06.203 8, 9, 10, 24, 00:13:06.203 Dumping successful io opcodes: 00:13:06.203 0, 00:13:06.203 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1132463, total successful commands: 4458, random_seed: 3363219456 00:13:06.203 NS: 0x200003a1ef00 admin qp, Total commands completed: 279905, total successful commands: 2256, random_seed: 2846282304 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2259440 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2259440 ']' 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2259440 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2259440 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2259440' 00:13:06.203 killing process with pid 2259440 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2259440 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2259440 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:06.203 00:13:06.203 real 0m32.760s 00:13:06.203 user 0m35.406s 00:13:06.203 sys 0m26.529s 00:13:06.203 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:06.204 14:40:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:06.204 ************************************ 00:13:06.204 END TEST nvmf_vfio_user_fuzz 00:13:06.204 ************************************ 00:13:06.204 14:40:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:06.204 14:40:24 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:06.204 14:40:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:06.204 14:40:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.204 14:40:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:06.204 ************************************ 
00:13:06.204 START TEST nvmf_host_management 00:13:06.204 ************************************ 00:13:06.204 14:40:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:06.204 * Looking for test storage... 00:13:06.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.204 14:40:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.204 14:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:06.204 14:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.204 14:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.204 14:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.204 14:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.204 14:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.204 14:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.204 14:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.204 14:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.204 14:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.204 14:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.204 
14:40:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:06.204 14:40:25 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:06.204 14:40:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:10.408 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:10.408 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:10.408 Found net devices under 0000:86:00.0: cvl_0_0 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:10.408 Found net devices under 0000:86:00.1: cvl_0_1 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.408 14:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.408 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.408 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.408 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:10.408 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:10.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:13:10.409 00:13:10.409 --- 10.0.0.2 ping statistics --- 00:13:10.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.409 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:10.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:13:10.409 00:13:10.409 --- 10.0.0.1 ping statistics --- 00:13:10.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.409 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2267838 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2267838 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2267838 ']' 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:10.409 14:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.409 [2024-07-25 14:40:30.314048] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:13:10.409 [2024-07-25 14:40:30.314098] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.409 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.409 [2024-07-25 14:40:30.371934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.409 [2024-07-25 14:40:30.453257] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.409 [2024-07-25 14:40:30.453291] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.409 [2024-07-25 14:40:30.453298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.409 [2024-07-25 14:40:30.453304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.409 [2024-07-25 14:40:30.453309] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:10.409 [2024-07-25 14:40:30.453350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.409 [2024-07-25 14:40:30.453434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.409 [2024-07-25 14:40:30.453544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.409 [2024-07-25 14:40:30.453545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.980 [2024-07-25 14:40:31.155052] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:10.980 14:40:31 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.980 Malloc0 00:13:10.980 [2024-07-25 14:40:31.214806] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:10.980 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2267904 00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2267904 /var/tmp/bdevperf.sock 00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2267904 ']' 00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:10.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:10.981 { 00:13:10.981 "params": { 00:13:10.981 "name": "Nvme$subsystem", 00:13:10.981 "trtype": "$TEST_TRANSPORT", 00:13:10.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:10.981 "adrfam": "ipv4", 00:13:10.981 "trsvcid": "$NVMF_PORT", 00:13:10.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:10.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:10.981 "hdgst": ${hdgst:-false}, 00:13:10.981 "ddgst": ${ddgst:-false} 00:13:10.981 }, 00:13:10.981 "method": "bdev_nvme_attach_controller" 00:13:10.981 } 00:13:10.981 EOF 00:13:10.981 )") 00:13:10.981 14:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:11.242 14:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:11.242 14:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:11.242 14:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:11.242 "params": { 00:13:11.242 "name": "Nvme0", 00:13:11.242 "trtype": "tcp", 00:13:11.242 "traddr": "10.0.0.2", 00:13:11.242 "adrfam": "ipv4", 00:13:11.242 "trsvcid": "4420", 00:13:11.242 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:11.242 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:11.242 "hdgst": false, 00:13:11.242 "ddgst": false 00:13:11.242 }, 00:13:11.242 "method": "bdev_nvme_attach_controller" 00:13:11.242 }' 00:13:11.242 [2024-07-25 14:40:31.309226] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:13:11.242 [2024-07-25 14:40:31.309283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267904 ] 00:13:11.242 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.242 [2024-07-25 14:40:31.365104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.242 [2024-07-25 14:40:31.439314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.502 Running I/O for 10 seconds... 
00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.074 14:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
00:13:12.074 [2024-07-25 14:40:32.209704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:13:12.074 [2024-07-25 14:40:32.209738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:12.074 [the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair is logged for admin cid:1, cid:2 and cid:3]
00:13:12.074 [2024-07-25 14:40:32.209788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb00980 is same with the state(5) to be set
00:13:12.074 [2024-07-25 14:40:32.210221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:12.074 [2024-07-25 14:40:32.210236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:12.075 [the same WRITE / ABORTED - SQ DELETION pair is logged for every other queued command, cid:1 through cid:63, lba:73856 through lba:81792 in 128-block steps: all 64 outstanding verify writes are aborted when the target deletes the submission queue]
00:13:12.075 [2024-07-25 14:40:32.211295] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf32990 was disconnected and freed. reset controller.
00:13:12.076 [2024-07-25 14:40:32.212188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:12.076 task offset: 73728 on job bdev=Nvme0n1 fails 00:13:12.076 00:13:12.076 Latency(us) 00:13:12.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.076 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:12.076 Job: Nvme0n1 ended in about 0.61 seconds with error 00:13:12.076 Verification LBA range: start 0x0 length 0x400 00:13:12.076 Nvme0n1 : 0.61 946.01 59.13 105.11 0.00 59860.90 1360.58 65649.98 00:13:12.076 =================================================================================================================== 00:13:12.076 Total : 946.01 59.13 105.11 0.00 59860.90 1360.58 65649.98 00:13:12.076 [2024-07-25 14:40:32.213792] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:12.076 [2024-07-25 14:40:32.213806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb00980 (9): Bad file descriptor 00:13:12.076 14:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.076 14:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:12.076 [2024-07-25 14:40:32.266320] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:13.014 14:40:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2267904 00:13:13.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2267904) - No such process 00:13:13.014 14:40:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:13.014 14:40:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:13.014 14:40:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:13.014 14:40:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:13.014 14:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:13.014 14:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:13.014 14:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:13.014 14:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:13.014 { 00:13:13.015 "params": { 00:13:13.015 "name": "Nvme$subsystem", 00:13:13.015 "trtype": "$TEST_TRANSPORT", 00:13:13.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:13.015 "adrfam": "ipv4", 00:13:13.015 "trsvcid": "$NVMF_PORT", 00:13:13.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:13.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:13.015 "hdgst": ${hdgst:-false}, 00:13:13.015 "ddgst": ${ddgst:-false} 00:13:13.015 }, 00:13:13.015 "method": "bdev_nvme_attach_controller" 00:13:13.015 } 00:13:13.015 EOF 00:13:13.015 )") 00:13:13.015 14:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:13.015 14:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
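What the gen_nvmf_target_json machinery traced above boils down to is a small JSON document (printed just below) telling bdevperf to attach one NVMe-oF controller at 10.0.0.2:4420, and /dev/fd/62 is, in effect, the read end of a process substitution feeding that document to --json. An equivalent standalone invocation, sketched from the traced arguments; the flags are taken verbatim from the log, while the relative bdevperf path stands in for the full workspace path:

    # Second bdevperf pass: same attach config, fed over a process substitution.
    ./build/examples/bdevperf \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 1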
00:13:13.015 14:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:13.015 14:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:13.015 "params": { 00:13:13.015 "name": "Nvme0", 00:13:13.015 "trtype": "tcp", 00:13:13.015 "traddr": "10.0.0.2", 00:13:13.015 "adrfam": "ipv4", 00:13:13.015 "trsvcid": "4420", 00:13:13.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:13.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:13.015 "hdgst": false, 00:13:13.015 "ddgst": false 00:13:13.015 }, 00:13:13.015 "method": "bdev_nvme_attach_controller" 00:13:13.015 }' 00:13:13.015 [2024-07-25 14:40:33.266425] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:13:13.015 [2024-07-25 14:40:33.266472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268358 ] 00:13:13.015 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.287 [2024-07-25 14:40:33.320661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.287 [2024-07-25 14:40:33.391808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.287 Running I/O for 1 seconds... 00:13:14.670 00:13:14.670 Latency(us) 00:13:14.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.670 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:14.670 Verification LBA range: start 0x0 length 0x400 00:13:14.670 Nvme0n1 : 1.08 951.16 59.45 0.00 0.00 63949.63 17552.25 69753.10 00:13:14.670 =================================================================================================================== 00:13:14.670 Total : 951.16 59.45 0.00 0.00 63949.63 17552.25 69753.10 00:13:14.670 14:40:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:14.670 14:40:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:14.670 14:40:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:14.670 14:40:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:14.670 14:40:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:14.670 14:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:14.671 rmmod nvme_tcp 00:13:14.671 rmmod nvme_fabrics 00:13:14.671 rmmod nvme_keyring 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@489 -- # '[' -n 2267838 ']' 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2267838 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2267838 ']' 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2267838 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2267838 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2267838' 00:13:14.671 killing process with pid 2267838 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2267838 00:13:14.671 14:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2267838 00:13:14.930 [2024-07-25 14:40:35.121554] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:14.930 14:40:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.930 14:40:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.930 14:40:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.930 14:40:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.930 14:40:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.930 14:40:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.930 14:40:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.930 14:40:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.537 14:40:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:17.537 14:40:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:17.537 00:13:17.537 real 0m12.302s 00:13:17.537 user 0m22.839s 00:13:17.537 sys 0m4.921s 00:13:17.537 14:40:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.537 14:40:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:17.537 ************************************ 00:13:17.537 END TEST nvmf_host_management 00:13:17.537 ************************************ 00:13:17.537 14:40:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:17.537 14:40:37 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:17.537 14:40:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:17.537 14:40:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.537 14:40:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:17.537 ************************************ 00:13:17.537 START TEST nvmf_lvol 00:13:17.537 
************************************ 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:17.537 * Looking for test storage... 00:13:17.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:17.537 14:40:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:22.817 14:40:42 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:22.817 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:22.817 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:22.817 Found net devices under 0000:86:00.0: cvl_0_0 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:22.817 Found net devices under 0000:86:00.1: cvl_0_1 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:22.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:13:22.817 00:13:22.817 --- 10.0.0.2 ping statistics --- 00:13:22.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.817 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.458 ms 00:13:22.817 00:13:22.817 --- 10.0.0.1 ping statistics --- 00:13:22.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.817 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:22.817 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2272112 00:13:22.818 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2272112 00:13:22.818 14:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:22.818 14:40:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2272112 ']' 00:13:22.818 14:40:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.818 14:40:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.818 14:40:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.818 14:40:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.818 14:40:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:22.818 [2024-07-25 14:40:42.975618] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:13:22.818 [2024-07-25 14:40:42.975661] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.818 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.818 [2024-07-25 14:40:43.034140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:23.078 [2024-07-25 14:40:43.115719] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.078 [2024-07-25 14:40:43.115755] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:23.078 [2024-07-25 14:40:43.115762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.078 [2024-07-25 14:40:43.115768] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.078 [2024-07-25 14:40:43.115773] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.078 [2024-07-25 14:40:43.115814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.078 [2024-07-25 14:40:43.115909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.078 [2024-07-25 14:40:43.115909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.646 14:40:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.646 14:40:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:23.646 14:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:23.646 14:40:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:23.646 14:40:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:23.646 14:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.646 14:40:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:23.905 [2024-07-25 14:40:43.977566] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.905 14:40:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:24.166 14:40:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:24.166 14:40:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:24.166 14:40:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:24.166 14:40:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:24.426 14:40:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:24.686 14:40:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=14013912-c8c5-43ff-aea0-01139294be8f 00:13:24.686 14:40:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 14013912-c8c5-43ff-aea0-01139294be8f lvol 20 00:13:24.686 14:40:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=70c9ef52-999c-489e-b0f3-de746d5604b8 00:13:24.686 14:40:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:24.946 14:40:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 70c9ef52-999c-489e-b0f3-de746d5604b8 00:13:25.205 14:40:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
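Strung together, the RPCs traced above build the whole data path for the lvol test: two 64 MB malloc bdevs striped into raid0, an lvstore on top, and one lvol exported through cnode0 over TCP. Restated compactly below (rpc.py abbreviates the full scripts/rpc.py path used in the log, the UUIDs are the ones this run reported, and the sizes follow MALLOC_BDEV_SIZE and LVOL_BDEV_INIT_SIZE from the script):

    # Compact restatement of the lvol target setup traced above.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                    # Malloc0
    rpc.py bdev_malloc_create 64 512                    # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)    # 14013912-... in this run
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)   # 70c9ef52-... in this run
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The block that follows then grows and copies that lvol under load: snapshot it as MY_SNAPSHOT, resize it from 20 to 30, clone the snapshot as MY_CLONE and inflate the clone, all while spdk_nvme_perf issues random writes over the same listener.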
00:13:25.205 [2024-07-25 14:40:45.482204] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.465 14:40:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:25.465 14:40:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2272610 00:13:25.465 14:40:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:25.465 14:40:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:25.465 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.414 14:40:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 70c9ef52-999c-489e-b0f3-de746d5604b8 MY_SNAPSHOT 00:13:26.677 14:40:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d3cf65ac-f447-4acd-827d-291e5ce1c1c4 00:13:26.677 14:40:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 70c9ef52-999c-489e-b0f3-de746d5604b8 30 00:13:26.937 14:40:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d3cf65ac-f447-4acd-827d-291e5ce1c1c4 MY_CLONE 00:13:27.197 14:40:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d98f0059-2796-4447-baf2-fb4d0bdbe074 00:13:27.197 14:40:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d98f0059-2796-4447-baf2-fb4d0bdbe074 00:13:27.764 14:40:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2272610 00:13:35.888 Initializing NVMe Controllers 00:13:35.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:35.888 Controller IO queue size 128, less than required. 00:13:35.888 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:35.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:35.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:35.888 Initialization complete. Launching workers. 
00:13:35.888 ======================================================== 00:13:35.888 Latency(us) 00:13:35.888 Device Information : IOPS MiB/s Average min max 00:13:35.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12010.30 46.92 10660.73 1856.73 79188.00 00:13:35.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11653.20 45.52 10986.08 2971.22 44969.71 00:13:35.888 ======================================================== 00:13:35.888 Total : 23663.50 92.44 10820.95 1856.73 79188.00 00:13:35.888 00:13:35.888 14:40:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:36.148 14:40:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 70c9ef52-999c-489e-b0f3-de746d5604b8 00:13:36.408 14:40:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 14013912-c8c5-43ff-aea0-01139294be8f 00:13:36.408 14:40:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:36.408 14:40:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:36.408 14:40:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:36.408 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:36.408 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:36.408 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:36.408 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:36.408 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:36.408 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:36.408 rmmod nvme_tcp 00:13:36.408 rmmod nvme_fabrics 00:13:36.668 rmmod nvme_keyring 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2272112 ']' 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2272112 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2272112 ']' 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2272112 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2272112 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2272112' 00:13:36.668 killing process with pid 2272112 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2272112 00:13:36.668 14:40:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2272112 00:13:36.929 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:36.929 
14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:36.929 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:36.929 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.929 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:36.929 14:40:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.929 14:40:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.929 14:40:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.839 14:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:38.839 00:13:38.839 real 0m21.775s 00:13:38.839 user 1m4.162s 00:13:38.839 sys 0m6.918s 00:13:38.839 14:40:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:38.839 14:40:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:38.839 ************************************ 00:13:38.839 END TEST nvmf_lvol 00:13:38.839 ************************************ 00:13:38.839 14:40:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:38.839 14:40:59 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:38.839 14:40:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:38.839 14:40:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.839 14:40:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:38.839 ************************************ 00:13:38.839 START TEST nvmf_lvs_grow 00:13:38.839 ************************************ 00:13:38.839 14:40:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:39.099 * Looking for test storage... 
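For reference, the nvmf_lvol run that just ended above reduces to the RPC sequence below. This is a minimal sketch reconstructed from the rpc.py calls visible in the log, not the test script itself; rpc.py and spdk_nvme_perf are invoked relative to the SPDK checkout, and <lvs-uuid>, <lvol-uuid>, <snap-uuid>, <clone-uuid> are placeholders for the UUIDs each call returns:

    # target setup: TCP transport, two malloc bdevs, RAID0 on top, an lvstore and a 20 MiB lvol
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                      # -> Malloc0
    scripts/rpc.py bdev_malloc_create 64 512                      # -> Malloc1
    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs             # -> <lvs-uuid>
    scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20         # -> <lvol-uuid>
    # export the lvol over NVMe/TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf runs randwrite against the namespace, mutate the lvol under load
    build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    scripts/rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT     # -> <snap-uuid>
    scripts/rpc.py bdev_lvol_resize <lvol-uuid> 30
    scripts/rpc.py bdev_lvol_clone <snap-uuid> MY_CLONE           # -> <clone-uuid>
    scripts/rpc.py bdev_lvol_inflate <clone-uuid>
    wait                                                          # let the 10 s perf job finish
    # teardown
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_lvol_delete <lvol-uuid>
    scripts/rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>

The perf summary above (roughly 23.6k IOPS total across cores 3 and 4) is the data-path check; the snapshot/resize/clone/inflate calls are issued while that I/O is in flight.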
00:13:39.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.099 14:40:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:39.100 14:40:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:44.448 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.448 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:44.449 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:44.449 Found net devices under 0000:86:00.0: cvl_0_0 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:44.449 Found net devices under 0000:86:00.1: cvl_0_1 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:44.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:13:44.449 00:13:44.449 --- 10.0.0.2 ping statistics --- 00:13:44.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.449 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:13:44.449 00:13:44.449 --- 10.0.0.1 ping statistics --- 00:13:44.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.449 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2277769 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2277769 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2277769 ']' 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.449 14:41:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:44.449 [2024-07-25 14:41:04.628554] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:13:44.449 [2024-07-25 14:41:04.628598] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.449 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.449 [2024-07-25 14:41:04.685886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.710 [2024-07-25 14:41:04.765754] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.710 [2024-07-25 14:41:04.765789] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:44.710 [2024-07-25 14:41:04.765796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.710 [2024-07-25 14:41:04.765802] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.710 [2024-07-25 14:41:04.765807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.710 [2024-07-25 14:41:04.765849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.282 14:41:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.282 14:41:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:45.282 14:41:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.282 14:41:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:45.282 14:41:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:45.282 14:41:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.282 14:41:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:45.542 [2024-07-25 14:41:05.608598] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.542 14:41:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:45.542 14:41:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:45.542 14:41:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.542 14:41:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:45.542 ************************************ 00:13:45.542 START TEST lvs_grow_clean 00:13:45.542 ************************************ 00:13:45.542 14:41:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:45.542 14:41:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:45.542 14:41:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:45.542 14:41:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:45.542 14:41:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:45.542 14:41:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:45.542 14:41:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:45.542 14:41:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:45.542 14:41:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:45.542 14:41:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:45.803 14:41:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:45.804 14:41:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:45.804 14:41:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a296a938-9271-4cdd-92cb-1cdcff8d5952 00:13:45.804 14:41:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a296a938-9271-4cdd-92cb-1cdcff8d5952 00:13:45.804 14:41:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:46.065 14:41:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:46.065 14:41:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:46.065 14:41:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a296a938-9271-4cdd-92cb-1cdcff8d5952 lvol 150 00:13:46.326 14:41:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a2cfaa7d-3c93-4f1e-a752-a5599d9d39e6 00:13:46.326 14:41:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:46.326 14:41:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:46.326 [2024-07-25 14:41:06.548595] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:46.326 [2024-07-25 14:41:06.548645] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:46.326 true 00:13:46.326 14:41:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:46.326 14:41:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a296a938-9271-4cdd-92cb-1cdcff8d5952 00:13:46.586 14:41:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:46.586 14:41:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:46.847 14:41:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a2cfaa7d-3c93-4f1e-a752-a5599d9d39e6 00:13:46.847 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:47.107 [2024-07-25 14:41:07.218596] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.107 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:47.107 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2278264 00:13:47.107 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:47.107 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:47.107 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2278264 /var/tmp/bdevperf.sock 00:13:47.107 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2278264 ']' 00:13:47.107 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.107 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.107 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:47.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:47.107 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.107 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:47.368 [2024-07-25 14:41:07.426503] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:13:47.368 [2024-07-25 14:41:07.426550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2278264 ] 00:13:47.368 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.368 [2024-07-25 14:41:07.478378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.368 [2024-07-25 14:41:07.551065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.368 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.368 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:47.368 14:41:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:47.939 Nvme0n1 00:13:47.939 14:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:47.939 [ 00:13:47.939 { 00:13:47.939 "name": "Nvme0n1", 00:13:47.939 "aliases": [ 00:13:47.939 "a2cfaa7d-3c93-4f1e-a752-a5599d9d39e6" 00:13:47.939 ], 00:13:47.939 "product_name": "NVMe disk", 00:13:47.939 "block_size": 4096, 00:13:47.939 "num_blocks": 38912, 00:13:47.939 "uuid": "a2cfaa7d-3c93-4f1e-a752-a5599d9d39e6", 00:13:47.939 "assigned_rate_limits": { 00:13:47.939 "rw_ios_per_sec": 0, 00:13:47.939 "rw_mbytes_per_sec": 0, 00:13:47.939 "r_mbytes_per_sec": 0, 00:13:47.939 "w_mbytes_per_sec": 0 00:13:47.939 }, 00:13:47.939 "claimed": false, 00:13:47.939 "zoned": false, 00:13:47.939 "supported_io_types": { 00:13:47.939 "read": true, 00:13:47.939 "write": true, 00:13:47.939 "unmap": true, 00:13:47.939 "flush": true, 00:13:47.939 "reset": true, 00:13:47.939 "nvme_admin": true, 00:13:47.939 "nvme_io": true, 00:13:47.939 "nvme_io_md": false, 00:13:47.939 "write_zeroes": true, 00:13:47.939 "zcopy": false, 00:13:47.939 "get_zone_info": false, 00:13:47.939 "zone_management": false, 00:13:47.939 "zone_append": false, 00:13:47.939 "compare": true, 00:13:47.939 "compare_and_write": true, 00:13:47.939 "abort": true, 00:13:47.939 "seek_hole": false, 00:13:47.939 "seek_data": false, 00:13:47.939 "copy": true, 00:13:47.939 "nvme_iov_md": false 00:13:47.939 }, 00:13:47.939 "memory_domains": [ 00:13:47.939 { 00:13:47.939 "dma_device_id": "system", 00:13:47.939 "dma_device_type": 1 00:13:47.939 } 00:13:47.939 ], 00:13:47.939 "driver_specific": { 00:13:47.939 "nvme": [ 00:13:47.939 { 00:13:47.939 "trid": { 00:13:47.939 "trtype": "TCP", 00:13:47.939 "adrfam": "IPv4", 00:13:47.939 "traddr": "10.0.0.2", 00:13:47.939 "trsvcid": "4420", 00:13:47.939 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:47.939 }, 00:13:47.939 "ctrlr_data": { 00:13:47.939 "cntlid": 1, 00:13:47.939 "vendor_id": "0x8086", 00:13:47.939 "model_number": "SPDK bdev Controller", 00:13:47.939 "serial_number": "SPDK0", 00:13:47.939 "firmware_revision": "24.09", 00:13:47.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:47.939 "oacs": { 00:13:47.939 "security": 0, 00:13:47.939 "format": 0, 00:13:47.939 "firmware": 0, 00:13:47.939 "ns_manage": 0 00:13:47.939 }, 00:13:47.939 "multi_ctrlr": true, 00:13:47.939 "ana_reporting": false 00:13:47.939 }, 
00:13:47.939 "vs": { 00:13:47.939 "nvme_version": "1.3" 00:13:47.939 }, 00:13:47.939 "ns_data": { 00:13:47.939 "id": 1, 00:13:47.939 "can_share": true 00:13:47.939 } 00:13:47.939 } 00:13:47.939 ], 00:13:47.939 "mp_policy": "active_passive" 00:13:47.939 } 00:13:47.939 } 00:13:47.939 ] 00:13:47.939 14:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2278494 00:13:47.939 14:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:47.939 14:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:48.199 Running I/O for 10 seconds... 00:13:49.135 Latency(us) 00:13:49.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.135 Nvme0n1 : 1.00 21149.00 82.61 0.00 0.00 0.00 0.00 0.00 00:13:49.135 =================================================================================================================== 00:13:49.135 Total : 21149.00 82.61 0.00 0.00 0.00 0.00 0.00 00:13:49.135 00:13:50.073 14:41:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a296a938-9271-4cdd-92cb-1cdcff8d5952 00:13:50.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.073 Nvme0n1 : 2.00 21490.50 83.95 0.00 0.00 0.00 0.00 0.00 00:13:50.073 =================================================================================================================== 00:13:50.073 Total : 21490.50 83.95 0.00 0.00 0.00 0.00 0.00 00:13:50.073 00:13:50.333 true 00:13:50.333 14:41:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a296a938-9271-4cdd-92cb-1cdcff8d5952 00:13:50.333 14:41:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:50.333 14:41:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:50.333 14:41:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:50.333 14:41:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2278494 00:13:51.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.271 Nvme0n1 : 3.00 21904.33 85.56 0.00 0.00 0.00 0.00 0.00 00:13:51.271 =================================================================================================================== 00:13:51.271 Total : 21904.33 85.56 0.00 0.00 0.00 0.00 0.00 00:13:51.271 00:13:52.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.209 Nvme0n1 : 4.00 21935.50 85.69 0.00 0.00 0.00 0.00 0.00 00:13:52.209 =================================================================================================================== 00:13:52.209 Total : 21935.50 85.69 0.00 0.00 0.00 0.00 0.00 00:13:52.209 00:13:53.160 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.160 Nvme0n1 : 5.00 22054.80 86.15 0.00 0.00 0.00 0.00 0.00 00:13:53.160 =================================================================================================================== 00:13:53.160 
Total : 22054.80 86.15 0.00 0.00 0.00 0.00 0.00 00:13:53.160 00:13:54.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:54.097 Nvme0n1 : 6.00 22079.17 86.25 0.00 0.00 0.00 0.00 0.00 00:13:54.097 =================================================================================================================== 00:13:54.097 Total : 22079.17 86.25 0.00 0.00 0.00 0.00 0.00 00:13:54.097 00:13:55.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.034 Nvme0n1 : 7.00 22130.29 86.45 0.00 0.00 0.00 0.00 0.00 00:13:55.034 =================================================================================================================== 00:13:55.034 Total : 22130.29 86.45 0.00 0.00 0.00 0.00 0.00 00:13:55.034 00:13:56.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.413 Nvme0n1 : 8.00 22127.12 86.43 0.00 0.00 0.00 0.00 0.00 00:13:56.413 =================================================================================================================== 00:13:56.413 Total : 22127.12 86.43 0.00 0.00 0.00 0.00 0.00 00:13:56.413 00:13:57.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.353 Nvme0n1 : 9.00 22091.11 86.29 0.00 0.00 0.00 0.00 0.00 00:13:57.353 =================================================================================================================== 00:13:57.353 Total : 22091.11 86.29 0.00 0.00 0.00 0.00 0.00 00:13:57.353 00:13:58.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.290 Nvme0n1 : 10.00 22113.10 86.38 0.00 0.00 0.00 0.00 0.00 00:13:58.290 =================================================================================================================== 00:13:58.290 Total : 22113.10 86.38 0.00 0.00 0.00 0.00 0.00 00:13:58.290 00:13:58.290 00:13:58.290 Latency(us) 00:13:58.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.290 Nvme0n1 : 10.01 22112.55 86.38 0.00 0.00 5784.52 2706.92 30773.43 00:13:58.290 =================================================================================================================== 00:13:58.290 Total : 22112.55 86.38 0.00 0.00 5784.52 2706.92 30773.43 00:13:58.290 0 00:13:58.290 14:41:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2278264 00:13:58.290 14:41:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2278264 ']' 00:13:58.290 14:41:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2278264 00:13:58.290 14:41:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:58.290 14:41:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.290 14:41:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2278264 00:13:58.291 14:41:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:58.291 14:41:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:58.291 14:41:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2278264' 00:13:58.291 killing process with pid 2278264 00:13:58.291 14:41:18 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2278264 00:13:58.291 Received shutdown signal, test time was about 10.000000 seconds 00:13:58.291 00:13:58.291 Latency(us) 00:13:58.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.291 =================================================================================================================== 00:13:58.291 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:58.291 14:41:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2278264 00:13:58.291 14:41:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:58.585 14:41:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:58.883 14:41:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a296a938-9271-4cdd-92cb-1cdcff8d5952 00:13:58.883 14:41:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:58.883 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:58.883 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:58.883 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:59.143 [2024-07-25 14:41:19.240473] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:59.143 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a296a938-9271-4cdd-92cb-1cdcff8d5952 00:13:59.143 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:59.143 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a296a938-9271-4cdd-92cb-1cdcff8d5952 00:13:59.143 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.143 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.143 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.143 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.143 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.143 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.143 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.143 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:59.143 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a296a938-9271-4cdd-92cb-1cdcff8d5952 00:13:59.404 request: 00:13:59.404 { 00:13:59.404 "uuid": "a296a938-9271-4cdd-92cb-1cdcff8d5952", 00:13:59.404 "method": "bdev_lvol_get_lvstores", 00:13:59.404 "req_id": 1 00:13:59.404 } 00:13:59.404 Got JSON-RPC error response 00:13:59.404 response: 00:13:59.404 { 00:13:59.404 "code": -19, 00:13:59.404 "message": "No such device" 00:13:59.404 } 00:13:59.404 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:59.404 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:59.404 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:59.404 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:59.404 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:59.404 aio_bdev 00:13:59.404 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a2cfaa7d-3c93-4f1e-a752-a5599d9d39e6 00:13:59.404 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=a2cfaa7d-3c93-4f1e-a752-a5599d9d39e6 00:13:59.404 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:59.404 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:59.404 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:59.404 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:59.404 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:59.665 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a2cfaa7d-3c93-4f1e-a752-a5599d9d39e6 -t 2000 00:13:59.925 [ 00:13:59.925 { 00:13:59.925 "name": "a2cfaa7d-3c93-4f1e-a752-a5599d9d39e6", 00:13:59.925 "aliases": [ 00:13:59.925 "lvs/lvol" 00:13:59.925 ], 00:13:59.925 "product_name": "Logical Volume", 00:13:59.925 "block_size": 4096, 00:13:59.925 "num_blocks": 38912, 00:13:59.925 "uuid": "a2cfaa7d-3c93-4f1e-a752-a5599d9d39e6", 00:13:59.925 "assigned_rate_limits": { 00:13:59.925 "rw_ios_per_sec": 0, 00:13:59.925 "rw_mbytes_per_sec": 0, 00:13:59.925 "r_mbytes_per_sec": 0, 00:13:59.925 "w_mbytes_per_sec": 0 00:13:59.925 }, 00:13:59.925 "claimed": false, 00:13:59.925 "zoned": false, 00:13:59.925 "supported_io_types": { 00:13:59.925 "read": true, 00:13:59.925 "write": true, 00:13:59.925 "unmap": true, 00:13:59.925 "flush": false, 00:13:59.925 "reset": true, 00:13:59.925 "nvme_admin": false, 00:13:59.925 "nvme_io": false, 00:13:59.925 
"nvme_io_md": false, 00:13:59.925 "write_zeroes": true, 00:13:59.925 "zcopy": false, 00:13:59.925 "get_zone_info": false, 00:13:59.925 "zone_management": false, 00:13:59.925 "zone_append": false, 00:13:59.925 "compare": false, 00:13:59.925 "compare_and_write": false, 00:13:59.925 "abort": false, 00:13:59.925 "seek_hole": true, 00:13:59.925 "seek_data": true, 00:13:59.925 "copy": false, 00:13:59.925 "nvme_iov_md": false 00:13:59.925 }, 00:13:59.925 "driver_specific": { 00:13:59.925 "lvol": { 00:13:59.925 "lvol_store_uuid": "a296a938-9271-4cdd-92cb-1cdcff8d5952", 00:13:59.925 "base_bdev": "aio_bdev", 00:13:59.925 "thin_provision": false, 00:13:59.925 "num_allocated_clusters": 38, 00:13:59.925 "snapshot": false, 00:13:59.925 "clone": false, 00:13:59.925 "esnap_clone": false 00:13:59.925 } 00:13:59.925 } 00:13:59.925 } 00:13:59.925 ] 00:13:59.925 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:59.925 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a296a938-9271-4cdd-92cb-1cdcff8d5952 00:13:59.925 14:41:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:59.925 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:59.925 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a296a938-9271-4cdd-92cb-1cdcff8d5952 00:13:59.925 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:00.186 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:00.186 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a2cfaa7d-3c93-4f1e-a752-a5599d9d39e6 00:14:00.445 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a296a938-9271-4cdd-92cb-1cdcff8d5952 00:14:00.445 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:00.705 00:14:00.705 real 0m15.196s 00:14:00.705 user 0m14.702s 00:14:00.705 sys 0m1.488s 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:00.705 ************************************ 00:14:00.705 END TEST lvs_grow_clean 00:14:00.705 ************************************ 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:00.705 ************************************ 00:14:00.705 START TEST lvs_grow_dirty 00:14:00.705 ************************************ 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:00.705 14:41:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:00.965 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:00.965 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:01.225 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=726a49ac-7261-47de-a6bd-332bd4f33c59 00:14:01.225 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 726a49ac-7261-47de-a6bd-332bd4f33c59 00:14:01.225 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:01.225 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:01.225 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:01.225 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 726a49ac-7261-47de-a6bd-332bd4f33c59 lvol 150 00:14:01.485 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8385b767-0873-4af3-9507-b028bda127fa 00:14:01.485 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:01.485 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:01.485 
[2024-07-25 14:41:21.767604] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:01.485 [2024-07-25 14:41:21.767653] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:01.485 true 00:14:01.745 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:01.745 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 726a49ac-7261-47de-a6bd-332bd4f33c59 00:14:01.745 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:01.745 14:41:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:02.005 14:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8385b767-0873-4af3-9507-b028bda127fa 00:14:02.265 14:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:02.265 [2024-07-25 14:41:22.453629] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.265 14:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:02.526 14:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2280855 00:14:02.526 14:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:02.526 14:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:02.526 14:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2280855 /var/tmp/bdevperf.sock 00:14:02.526 14:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2280855 ']' 00:14:02.526 14:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:02.526 14:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.526 14:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:02.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
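At this point the dirty-grow variant has built a 200M file-backed AIO bdev, created the lvstore (49 data clusters at a 4 MiB cluster size) and a 150M lvol, grown the file to 400M and rescanned the AIO bdev; the lvstore deliberately still reports 49 clusters because bdev_lvol_grow_lvstore is only issued later, while bdevperf is writing. A condensed sketch of that setup, assuming ./spdk for the checkout and /tmp/aio_file as a stand-in for the test's backing file:

# Back the lvstore with a plain file exposed as a 4096-byte-block AIO bdev.
truncate -s 200M /tmp/aio_file
./spdk/scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096

# 4 MiB clusters; the 200M file yields 49 usable data clusters.
lvs=$(./spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$(./spdk/scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)

# Grow the backing file and rescan; the lvstore itself is grown later, under I/O.
truncate -s 400M /tmp/aio_file
./spdk/scripts/rpc.py bdev_aio_rescan aio_bdev

# Export the lvol over NVMe/TCP so bdevperf can drive random writes against it.
./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420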
00:14:02.526 14:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.526 14:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:02.526 [2024-07-25 14:41:22.659067] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:14:02.526 [2024-07-25 14:41:22.659113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2280855 ] 00:14:02.526 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.526 [2024-07-25 14:41:22.711050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.526 [2024-07-25 14:41:22.790278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.465 14:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.465 14:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:03.465 14:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:03.465 Nvme0n1 00:14:03.465 14:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:03.724 [ 00:14:03.724 { 00:14:03.724 "name": "Nvme0n1", 00:14:03.724 "aliases": [ 00:14:03.724 "8385b767-0873-4af3-9507-b028bda127fa" 00:14:03.724 ], 00:14:03.724 "product_name": "NVMe disk", 00:14:03.724 "block_size": 4096, 00:14:03.724 "num_blocks": 38912, 00:14:03.724 "uuid": "8385b767-0873-4af3-9507-b028bda127fa", 00:14:03.724 "assigned_rate_limits": { 00:14:03.724 "rw_ios_per_sec": 0, 00:14:03.724 "rw_mbytes_per_sec": 0, 00:14:03.725 "r_mbytes_per_sec": 0, 00:14:03.725 "w_mbytes_per_sec": 0 00:14:03.725 }, 00:14:03.725 "claimed": false, 00:14:03.725 "zoned": false, 00:14:03.725 "supported_io_types": { 00:14:03.725 "read": true, 00:14:03.725 "write": true, 00:14:03.725 "unmap": true, 00:14:03.725 "flush": true, 00:14:03.725 "reset": true, 00:14:03.725 "nvme_admin": true, 00:14:03.725 "nvme_io": true, 00:14:03.725 "nvme_io_md": false, 00:14:03.725 "write_zeroes": true, 00:14:03.725 "zcopy": false, 00:14:03.725 "get_zone_info": false, 00:14:03.725 "zone_management": false, 00:14:03.725 "zone_append": false, 00:14:03.725 "compare": true, 00:14:03.725 "compare_and_write": true, 00:14:03.725 "abort": true, 00:14:03.725 "seek_hole": false, 00:14:03.725 "seek_data": false, 00:14:03.725 "copy": true, 00:14:03.725 "nvme_iov_md": false 00:14:03.725 }, 00:14:03.725 "memory_domains": [ 00:14:03.725 { 00:14:03.725 "dma_device_id": "system", 00:14:03.725 "dma_device_type": 1 00:14:03.725 } 00:14:03.725 ], 00:14:03.725 "driver_specific": { 00:14:03.725 "nvme": [ 00:14:03.725 { 00:14:03.725 "trid": { 00:14:03.725 "trtype": "TCP", 00:14:03.725 "adrfam": "IPv4", 00:14:03.725 "traddr": "10.0.0.2", 00:14:03.725 "trsvcid": "4420", 00:14:03.725 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:03.725 }, 00:14:03.725 "ctrlr_data": { 00:14:03.725 "cntlid": 1, 00:14:03.725 "vendor_id": "0x8086", 00:14:03.725 "model_number": "SPDK bdev Controller", 00:14:03.725 "serial_number": "SPDK0", 
00:14:03.725 "firmware_revision": "24.09", 00:14:03.725 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:03.725 "oacs": { 00:14:03.725 "security": 0, 00:14:03.725 "format": 0, 00:14:03.725 "firmware": 0, 00:14:03.725 "ns_manage": 0 00:14:03.725 }, 00:14:03.725 "multi_ctrlr": true, 00:14:03.725 "ana_reporting": false 00:14:03.725 }, 00:14:03.725 "vs": { 00:14:03.725 "nvme_version": "1.3" 00:14:03.725 }, 00:14:03.725 "ns_data": { 00:14:03.725 "id": 1, 00:14:03.725 "can_share": true 00:14:03.725 } 00:14:03.725 } 00:14:03.725 ], 00:14:03.725 "mp_policy": "active_passive" 00:14:03.725 } 00:14:03.725 } 00:14:03.725 ] 00:14:03.725 14:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2281095 00:14:03.725 14:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:03.725 14:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:03.725 Running I/O for 10 seconds... 00:14:05.100 Latency(us) 00:14:05.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.100 Nvme0n1 : 1.00 21424.00 83.69 0.00 0.00 0.00 0.00 0.00 00:14:05.100 =================================================================================================================== 00:14:05.100 Total : 21424.00 83.69 0.00 0.00 0.00 0.00 0.00 00:14:05.100 00:14:05.668 14:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 726a49ac-7261-47de-a6bd-332bd4f33c59 00:14:05.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.927 Nvme0n1 : 2.00 21690.00 84.73 0.00 0.00 0.00 0.00 0.00 00:14:05.927 =================================================================================================================== 00:14:05.927 Total : 21690.00 84.73 0.00 0.00 0.00 0.00 0.00 00:14:05.927 00:14:05.927 true 00:14:05.927 14:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 726a49ac-7261-47de-a6bd-332bd4f33c59 00:14:05.927 14:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:06.186 14:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:06.186 14:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:06.186 14:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2281095 00:14:06.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.754 Nvme0n1 : 3.00 21721.67 84.85 0.00 0.00 0.00 0.00 0.00 00:14:06.754 =================================================================================================================== 00:14:06.754 Total : 21721.67 84.85 0.00 0.00 0.00 0.00 0.00 00:14:06.754 00:14:08.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.138 Nvme0n1 : 4.00 21906.00 85.57 0.00 0.00 0.00 0.00 0.00 00:14:08.138 =================================================================================================================== 00:14:08.138 Total : 21906.00 85.57 0.00 
0.00 0.00 0.00 0.00 00:14:08.138 00:14:08.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.705 Nvme0n1 : 5.00 21885.20 85.49 0.00 0.00 0.00 0.00 0.00 00:14:08.705 =================================================================================================================== 00:14:08.705 Total : 21885.20 85.49 0.00 0.00 0.00 0.00 0.00 00:14:08.705 00:14:10.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.083 Nvme0n1 : 6.00 21931.00 85.67 0.00 0.00 0.00 0.00 0.00 00:14:10.083 =================================================================================================================== 00:14:10.083 Total : 21931.00 85.67 0.00 0.00 0.00 0.00 0.00 00:14:10.083 00:14:11.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.019 Nvme0n1 : 7.00 21961.86 85.79 0.00 0.00 0.00 0.00 0.00 00:14:11.019 =================================================================================================================== 00:14:11.019 Total : 21961.86 85.79 0.00 0.00 0.00 0.00 0.00 00:14:11.019 00:14:11.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.955 Nvme0n1 : 8.00 21959.75 85.78 0.00 0.00 0.00 0.00 0.00 00:14:11.955 =================================================================================================================== 00:14:11.955 Total : 21959.75 85.78 0.00 0.00 0.00 0.00 0.00 00:14:11.955 00:14:12.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.893 Nvme0n1 : 9.00 21987.22 85.89 0.00 0.00 0.00 0.00 0.00 00:14:12.893 =================================================================================================================== 00:14:12.893 Total : 21987.22 85.89 0.00 0.00 0.00 0.00 0.00 00:14:12.893 00:14:13.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.830 Nvme0n1 : 10.00 22006.20 85.96 0.00 0.00 0.00 0.00 0.00 00:14:13.830 =================================================================================================================== 00:14:13.830 Total : 22006.20 85.96 0.00 0.00 0.00 0.00 0.00 00:14:13.830 00:14:13.830 00:14:13.830 Latency(us) 00:14:13.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.830 Nvme0n1 : 10.01 22007.31 85.97 0.00 0.00 5812.77 3006.11 28493.91 00:14:13.830 =================================================================================================================== 00:14:13.830 Total : 22007.31 85.97 0.00 0.00 5812.77 3006.11 28493.91 00:14:13.830 0 00:14:13.830 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2280855 00:14:13.830 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2280855 ']' 00:14:13.830 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2280855 00:14:13.830 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:13.830 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:13.830 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2280855 00:14:13.830 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:13.830 14:41:34 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:13.830 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2280855' 00:14:13.830 killing process with pid 2280855 00:14:13.830 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2280855 00:14:13.830 Received shutdown signal, test time was about 10.000000 seconds 00:14:13.830 00:14:13.830 Latency(us) 00:14:13.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.830 =================================================================================================================== 00:14:13.830 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:13.830 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2280855 00:14:14.090 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:14.350 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:14.350 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 726a49ac-7261-47de-a6bd-332bd4f33c59 00:14:14.350 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2277769 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2277769 00:14:14.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2277769 Killed "${NVMF_APP[@]}" "$@" 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2282927 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2282927 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2282927 ']' 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.628 14:41:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:14.628 [2024-07-25 14:41:34.887173] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:14:14.628 [2024-07-25 14:41:34.887219] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.914 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.914 [2024-07-25 14:41:34.947658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.914 [2024-07-25 14:41:35.026296] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.914 [2024-07-25 14:41:35.026330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.914 [2024-07-25 14:41:35.026337] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.914 [2024-07-25 14:41:35.026343] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.914 [2024-07-25 14:41:35.026348] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:14.914 [2024-07-25 14:41:35.026365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.482 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.482 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:15.482 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:15.482 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:15.482 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:15.482 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.482 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:15.741 [2024-07-25 14:41:35.879816] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:15.741 [2024-07-25 14:41:35.879903] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:15.741 [2024-07-25 14:41:35.879928] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:15.741 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:15.741 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8385b767-0873-4af3-9507-b028bda127fa 00:14:15.741 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8385b767-0873-4af3-9507-b028bda127fa 00:14:15.741 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:15.741 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:15.741 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:15.741 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:15.741 14:41:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:16.000 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8385b767-0873-4af3-9507-b028bda127fa -t 2000 00:14:16.000 [ 00:14:16.000 { 00:14:16.000 "name": "8385b767-0873-4af3-9507-b028bda127fa", 00:14:16.000 "aliases": [ 00:14:16.000 "lvs/lvol" 00:14:16.000 ], 00:14:16.000 "product_name": "Logical Volume", 00:14:16.000 "block_size": 4096, 00:14:16.000 "num_blocks": 38912, 00:14:16.000 "uuid": "8385b767-0873-4af3-9507-b028bda127fa", 00:14:16.000 "assigned_rate_limits": { 00:14:16.000 "rw_ios_per_sec": 0, 00:14:16.000 "rw_mbytes_per_sec": 0, 00:14:16.000 "r_mbytes_per_sec": 0, 00:14:16.000 "w_mbytes_per_sec": 0 00:14:16.000 }, 00:14:16.000 "claimed": false, 00:14:16.001 "zoned": false, 00:14:16.001 "supported_io_types": { 00:14:16.001 "read": true, 00:14:16.001 "write": true, 00:14:16.001 "unmap": true, 00:14:16.001 "flush": false, 00:14:16.001 "reset": true, 00:14:16.001 "nvme_admin": false, 00:14:16.001 "nvme_io": false, 00:14:16.001 "nvme_io_md": 
false, 00:14:16.001 "write_zeroes": true, 00:14:16.001 "zcopy": false, 00:14:16.001 "get_zone_info": false, 00:14:16.001 "zone_management": false, 00:14:16.001 "zone_append": false, 00:14:16.001 "compare": false, 00:14:16.001 "compare_and_write": false, 00:14:16.001 "abort": false, 00:14:16.001 "seek_hole": true, 00:14:16.001 "seek_data": true, 00:14:16.001 "copy": false, 00:14:16.001 "nvme_iov_md": false 00:14:16.001 }, 00:14:16.001 "driver_specific": { 00:14:16.001 "lvol": { 00:14:16.001 "lvol_store_uuid": "726a49ac-7261-47de-a6bd-332bd4f33c59", 00:14:16.001 "base_bdev": "aio_bdev", 00:14:16.001 "thin_provision": false, 00:14:16.001 "num_allocated_clusters": 38, 00:14:16.001 "snapshot": false, 00:14:16.001 "clone": false, 00:14:16.001 "esnap_clone": false 00:14:16.001 } 00:14:16.001 } 00:14:16.001 } 00:14:16.001 ] 00:14:16.001 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:16.001 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 726a49ac-7261-47de-a6bd-332bd4f33c59 00:14:16.001 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:16.260 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:16.260 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 726a49ac-7261-47de-a6bd-332bd4f33c59 00:14:16.261 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:16.520 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:16.520 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:16.520 [2024-07-25 14:41:36.736256] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:16.520 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 726a49ac-7261-47de-a6bd-332bd4f33c59 00:14:16.520 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:16.520 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 726a49ac-7261-47de-a6bd-332bd4f33c59 00:14:16.520 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.520 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.520 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.520 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.520 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:14:16.520 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.520 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.520 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:16.520 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 726a49ac-7261-47de-a6bd-332bd4f33c59 00:14:16.779 request: 00:14:16.779 { 00:14:16.779 "uuid": "726a49ac-7261-47de-a6bd-332bd4f33c59", 00:14:16.779 "method": "bdev_lvol_get_lvstores", 00:14:16.779 "req_id": 1 00:14:16.779 } 00:14:16.779 Got JSON-RPC error response 00:14:16.779 response: 00:14:16.779 { 00:14:16.779 "code": -19, 00:14:16.779 "message": "No such device" 00:14:16.779 } 00:14:16.779 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:16.779 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:16.779 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:16.779 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:16.779 14:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:17.039 aio_bdev 00:14:17.039 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8385b767-0873-4af3-9507-b028bda127fa 00:14:17.039 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8385b767-0873-4af3-9507-b028bda127fa 00:14:17.039 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:17.039 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:17.039 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:17.039 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:17.039 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:17.039 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8385b767-0873-4af3-9507-b028bda127fa -t 2000 00:14:17.299 [ 00:14:17.299 { 00:14:17.299 "name": "8385b767-0873-4af3-9507-b028bda127fa", 00:14:17.299 "aliases": [ 00:14:17.299 "lvs/lvol" 00:14:17.299 ], 00:14:17.299 "product_name": "Logical Volume", 00:14:17.299 "block_size": 4096, 00:14:17.299 "num_blocks": 38912, 00:14:17.299 "uuid": "8385b767-0873-4af3-9507-b028bda127fa", 00:14:17.299 "assigned_rate_limits": { 00:14:17.299 "rw_ios_per_sec": 0, 00:14:17.299 "rw_mbytes_per_sec": 0, 00:14:17.299 "r_mbytes_per_sec": 0, 00:14:17.299 "w_mbytes_per_sec": 0 00:14:17.299 }, 00:14:17.299 "claimed": false, 00:14:17.299 "zoned": false, 00:14:17.299 "supported_io_types": { 
00:14:17.299 "read": true, 00:14:17.299 "write": true, 00:14:17.299 "unmap": true, 00:14:17.299 "flush": false, 00:14:17.299 "reset": true, 00:14:17.299 "nvme_admin": false, 00:14:17.299 "nvme_io": false, 00:14:17.299 "nvme_io_md": false, 00:14:17.299 "write_zeroes": true, 00:14:17.299 "zcopy": false, 00:14:17.299 "get_zone_info": false, 00:14:17.299 "zone_management": false, 00:14:17.299 "zone_append": false, 00:14:17.299 "compare": false, 00:14:17.299 "compare_and_write": false, 00:14:17.299 "abort": false, 00:14:17.299 "seek_hole": true, 00:14:17.299 "seek_data": true, 00:14:17.299 "copy": false, 00:14:17.299 "nvme_iov_md": false 00:14:17.299 }, 00:14:17.299 "driver_specific": { 00:14:17.299 "lvol": { 00:14:17.299 "lvol_store_uuid": "726a49ac-7261-47de-a6bd-332bd4f33c59", 00:14:17.299 "base_bdev": "aio_bdev", 00:14:17.299 "thin_provision": false, 00:14:17.299 "num_allocated_clusters": 38, 00:14:17.299 "snapshot": false, 00:14:17.299 "clone": false, 00:14:17.299 "esnap_clone": false 00:14:17.299 } 00:14:17.299 } 00:14:17.299 } 00:14:17.299 ] 00:14:17.299 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:17.299 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 726a49ac-7261-47de-a6bd-332bd4f33c59 00:14:17.299 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:17.558 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:17.558 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 726a49ac-7261-47de-a6bd-332bd4f33c59 00:14:17.558 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:17.558 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:17.558 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8385b767-0873-4af3-9507-b028bda127fa 00:14:17.818 14:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 726a49ac-7261-47de-a6bd-332bd4f33c59 00:14:18.078 14:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:18.078 14:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:18.078 00:14:18.078 real 0m17.427s 00:14:18.078 user 0m43.964s 00:14:18.078 sys 0m4.011s 00:14:18.078 14:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:18.078 14:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:18.078 ************************************ 00:14:18.078 END TEST lvs_grow_dirty 00:14:18.078 ************************************ 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:18.338 nvmf_trace.0 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.338 rmmod nvme_tcp 00:14:18.338 rmmod nvme_fabrics 00:14:18.338 rmmod nvme_keyring 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2282927 ']' 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2282927 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2282927 ']' 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2282927 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2282927 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2282927' 00:14:18.338 killing process with pid 2282927 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2282927 00:14:18.338 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2282927 00:14:18.598 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:18.598 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:18.598 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:18.598 
14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:18.598 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:18.598 14:41:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.598 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:18.598 14:41:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.507 14:41:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:20.507 00:14:20.507 real 0m41.654s 00:14:20.507 user 1m4.362s 00:14:20.508 sys 0m9.956s 00:14:20.508 14:41:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:20.508 14:41:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:20.508 ************************************ 00:14:20.508 END TEST nvmf_lvs_grow 00:14:20.508 ************************************ 00:14:20.768 14:41:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:20.768 14:41:40 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:20.768 14:41:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:20.768 14:41:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:20.768 14:41:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:20.768 ************************************ 00:14:20.768 START TEST nvmf_bdev_io_wait 00:14:20.768 ************************************ 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:20.769 * Looking for test storage... 
00:14:20.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:20.769 14:41:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:26.048 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:26.048 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:26.048 Found net devices under 0000:86:00.0: cvl_0_0 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:26.048 Found net devices under 0000:86:00.1: cvl_0_1 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:26.048 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.049 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.049 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.049 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:26.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:26.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:14:26.049 00:14:26.049 --- 10.0.0.2 ping statistics --- 00:14:26.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.049 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:14:26.049 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.408 ms 00:14:26.049 00:14:26.049 --- 10.0.0.1 ping statistics --- 00:14:26.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.049 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:14:26.049 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.049 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:26.049 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:26.049 14:41:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2286976 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2286976 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2286976 ']' 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.049 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.049 [2024-07-25 14:41:46.052931] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
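The ping exchange above only works because nvmf_tcp_init moved the target-side port of the e810 NIC into its own network namespace: cvl_0_0 lives in cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, and TCP port 4420 is explicitly allowed through iptables. A condensed, root-required sketch of that wiring with the same interface names:

# Start from clean addresses on both ports of the NIC.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Move the target-side port into its own namespace so target and initiator share one host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator keeps 10.0.0.1 in the root namespace; the target gets 10.0.0.2 inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port and sanity-check reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1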
00:14:26.049 [2024-07-25 14:41:46.052972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.049 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.049 [2024-07-25 14:41:46.111426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.049 [2024-07-25 14:41:46.192984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.049 [2024-07-25 14:41:46.193022] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.049 [2024-07-25 14:41:46.193029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.049 [2024-07-25 14:41:46.193035] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.049 [2024-07-25 14:41:46.193040] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.049 [2024-07-25 14:41:46.193091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.049 [2024-07-25 14:41:46.193108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.049 [2024-07-25 14:41:46.193174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.049 [2024-07-25 14:41:46.193175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.617 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:26.618 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:26.618 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:26.618 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:26.618 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.876 [2024-07-25 14:41:46.986621] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.876 14:41:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.876 Malloc0 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.876 [2024-07-25 14:41:47.049611] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2287071 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2287074 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.876 { 00:14:26.876 "params": { 00:14:26.876 "name": "Nvme$subsystem", 00:14:26.876 "trtype": "$TEST_TRANSPORT", 00:14:26.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.876 "adrfam": "ipv4", 00:14:26.876 "trsvcid": "$NVMF_PORT", 00:14:26.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.876 "hdgst": ${hdgst:-false}, 00:14:26.876 "ddgst": ${ddgst:-false} 00:14:26.876 }, 00:14:26.876 "method": "bdev_nvme_attach_controller" 00:14:26.876 } 00:14:26.876 EOF 00:14:26.876 )") 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2287077 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.876 { 00:14:26.876 "params": { 00:14:26.876 "name": "Nvme$subsystem", 00:14:26.876 "trtype": "$TEST_TRANSPORT", 00:14:26.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.876 "adrfam": "ipv4", 00:14:26.876 "trsvcid": "$NVMF_PORT", 00:14:26.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.876 "hdgst": ${hdgst:-false}, 00:14:26.876 "ddgst": ${ddgst:-false} 00:14:26.876 }, 00:14:26.876 "method": "bdev_nvme_attach_controller" 00:14:26.876 } 00:14:26.876 EOF 00:14:26.876 )") 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2287080 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.876 { 00:14:26.876 "params": { 00:14:26.876 "name": "Nvme$subsystem", 00:14:26.876 "trtype": "$TEST_TRANSPORT", 00:14:26.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.876 "adrfam": "ipv4", 00:14:26.876 "trsvcid": "$NVMF_PORT", 00:14:26.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.876 "hdgst": ${hdgst:-false}, 00:14:26.876 "ddgst": ${ddgst:-false} 00:14:26.876 }, 00:14:26.876 "method": "bdev_nvme_attach_controller" 00:14:26.876 } 00:14:26.876 EOF 00:14:26.876 )") 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.876 14:41:47 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.876 { 00:14:26.876 "params": { 00:14:26.876 "name": "Nvme$subsystem", 00:14:26.876 "trtype": "$TEST_TRANSPORT", 00:14:26.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.876 "adrfam": "ipv4", 00:14:26.876 "trsvcid": "$NVMF_PORT", 00:14:26.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.876 "hdgst": ${hdgst:-false}, 00:14:26.876 "ddgst": ${ddgst:-false} 00:14:26.876 }, 00:14:26.876 "method": "bdev_nvme_attach_controller" 00:14:26.876 } 00:14:26.876 EOF 00:14:26.876 )") 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2287071 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.876 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.876 "params": { 00:14:26.876 "name": "Nvme1", 00:14:26.876 "trtype": "tcp", 00:14:26.877 "traddr": "10.0.0.2", 00:14:26.877 "adrfam": "ipv4", 00:14:26.877 "trsvcid": "4420", 00:14:26.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.877 "hdgst": false, 00:14:26.877 "ddgst": false 00:14:26.877 }, 00:14:26.877 "method": "bdev_nvme_attach_controller" 00:14:26.877 }' 00:14:26.877 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:14:26.877 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.877 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.877 "params": { 00:14:26.877 "name": "Nvme1", 00:14:26.877 "trtype": "tcp", 00:14:26.877 "traddr": "10.0.0.2", 00:14:26.877 "adrfam": "ipv4", 00:14:26.877 "trsvcid": "4420", 00:14:26.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.877 "hdgst": false, 00:14:26.877 "ddgst": false 00:14:26.877 }, 00:14:26.877 "method": "bdev_nvme_attach_controller" 00:14:26.877 }' 00:14:26.877 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.877 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.877 "params": { 00:14:26.877 "name": "Nvme1", 00:14:26.877 "trtype": "tcp", 00:14:26.877 "traddr": "10.0.0.2", 00:14:26.877 "adrfam": "ipv4", 00:14:26.877 "trsvcid": "4420", 00:14:26.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.877 "hdgst": false, 00:14:26.877 "ddgst": false 00:14:26.877 }, 00:14:26.877 "method": "bdev_nvme_attach_controller" 00:14:26.877 }' 00:14:26.877 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.877 14:41:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.877 "params": { 00:14:26.877 "name": "Nvme1", 00:14:26.877 "trtype": "tcp", 00:14:26.877 "traddr": "10.0.0.2", 00:14:26.877 "adrfam": "ipv4", 00:14:26.877 "trsvcid": "4420", 00:14:26.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.877 "hdgst": false, 00:14:26.877 "ddgst": false 00:14:26.877 }, 00:14:26.877 "method": "bdev_nvme_attach_controller" 00:14:26.877 }' 00:14:26.877 [2024-07-25 14:41:47.099874] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:14:26.877 [2024-07-25 14:41:47.099916] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:26.877 [2024-07-25 14:41:47.100985] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:14:26.877 [2024-07-25 14:41:47.101041] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:26.877 [2024-07-25 14:41:47.102429] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:14:26.877 [2024-07-25 14:41:47.102477] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:26.877 [2024-07-25 14:41:47.113723] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:14:26.877 [2024-07-25 14:41:47.113796] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:26.877 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.136 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.136 [2024-07-25 14:41:47.290596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.136 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.136 [2024-07-25 14:41:47.368482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:27.136 [2024-07-25 14:41:47.389647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.136 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.396 [2024-07-25 14:41:47.441486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.396 [2024-07-25 14:41:47.470760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:27.396 [2024-07-25 14:41:47.513574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:27.396 [2024-07-25 14:41:47.542050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.396 [2024-07-25 14:41:47.635017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:27.655 Running I/O for 1 seconds... 00:14:27.655 Running I/O for 1 seconds... 00:14:27.655 Running I/O for 1 seconds... 00:14:27.655 Running I/O for 1 seconds... 00:14:28.593 00:14:28.593 Latency(us) 00:14:28.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.593 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:28.593 Nvme1n1 : 1.01 13072.12 51.06 0.00 0.00 9726.58 2151.29 18464.06 00:14:28.593 =================================================================================================================== 00:14:28.593 Total : 13072.12 51.06 0.00 0.00 9726.58 2151.29 18464.06 00:14:28.593 00:14:28.593 Latency(us) 00:14:28.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.593 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:28.593 Nvme1n1 : 1.01 7726.45 30.18 0.00 0.00 16477.43 5698.78 22339.23 00:14:28.593 =================================================================================================================== 00:14:28.593 Total : 7726.45 30.18 0.00 0.00 16477.43 5698.78 22339.23 00:14:28.593 00:14:28.593 Latency(us) 00:14:28.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.593 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:28.593 Nvme1n1 : 1.00 245210.21 957.85 0.00 0.00 519.97 214.59 669.61 00:14:28.593 =================================================================================================================== 00:14:28.593 Total : 245210.21 957.85 0.00 0.00 519.97 214.59 669.61 00:14:28.852 00:14:28.852 Latency(us) 00:14:28.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.852 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:28.852 Nvme1n1 : 1.01 7850.63 30.67 0.00 0.00 16233.60 4074.63 38295.82 00:14:28.852 =================================================================================================================== 00:14:28.852 Total : 7850.63 30.67 0.00 0.00 16233.60 4074.63 38295.82 00:14:28.852 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 2287074 00:14:28.853 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2287077 00:14:28.853 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2287080 00:14:28.853 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.853 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.853 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:28.853 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.853 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:28.853 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:28.853 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:28.853 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:28.853 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:28.853 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:28.853 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:28.853 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:28.853 rmmod nvme_tcp 00:14:28.853 rmmod nvme_fabrics 00:14:28.853 rmmod nvme_keyring 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2286976 ']' 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2286976 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2286976 ']' 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2286976 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2286976 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2286976' 00:14:29.113 killing process with pid 2286976 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2286976 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2286976 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.113 14:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.652 14:41:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:31.652 00:14:31.652 real 0m10.608s 00:14:31.652 user 0m19.767s 00:14:31.652 sys 0m5.448s 00:14:31.652 14:41:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.652 14:41:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:31.652 ************************************ 00:14:31.652 END TEST nvmf_bdev_io_wait 00:14:31.652 ************************************ 00:14:31.652 14:41:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:31.652 14:41:51 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:31.652 14:41:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:31.652 14:41:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.652 14:41:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:31.652 ************************************ 00:14:31.652 START TEST nvmf_queue_depth 00:14:31.652 ************************************ 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:31.652 * Looking for test storage... 
00:14:31.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:31.652 14:41:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.992 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:36.993 
14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:36.993 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:36.993 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:36.993 Found net devices under 0000:86:00.0: cvl_0_0 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:36.993 Found net devices under 0000:86:00.1: cvl_0_1 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:36.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:36.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:14:36.993 00:14:36.993 --- 10.0.0.2 ping statistics --- 00:14:36.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.993 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:14:36.993 00:14:36.993 --- 10.0.0.1 ping statistics --- 00:14:36.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.993 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2290820 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2290820 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2290820 ']' 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:36.993 14:41:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:36.993 [2024-07-25 14:41:56.845697] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:14:36.993 [2024-07-25 14:41:56.845742] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.993 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.993 [2024-07-25 14:41:56.903663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.993 [2024-07-25 14:41:56.981196] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.993 [2024-07-25 14:41:56.981229] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.993 [2024-07-25 14:41:56.981236] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.993 [2024-07-25 14:41:56.981242] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.993 [2024-07-25 14:41:56.981247] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.993 [2024-07-25 14:41:56.981265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.557 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.557 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:37.557 14:41:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:37.557 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:37.557 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.557 14:41:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.557 14:41:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:37.557 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.557 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.558 [2024-07-25 14:41:57.684202] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.558 Malloc0 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.558 
14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.558 [2024-07-25 14:41:57.743007] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2291042 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2291042 /var/tmp/bdevperf.sock 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2291042 ']' 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.558 14:41:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.558 [2024-07-25 14:41:57.792201] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:14:37.558 [2024-07-25 14:41:57.792242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291042 ] 00:14:37.558 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.558 [2024-07-25 14:41:57.845265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.816 [2024-07-25 14:41:57.920382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.384 14:41:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.384 14:41:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:38.384 14:41:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:38.384 14:41:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.384 14:41:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:38.643 NVMe0n1 00:14:38.643 14:41:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.643 14:41:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:38.643 Running I/O for 10 seconds... 00:14:50.847 00:14:50.847 Latency(us) 00:14:50.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.847 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:50.847 Verification LBA range: start 0x0 length 0x4000 00:14:50.847 NVMe0n1 : 10.07 11704.97 45.72 0.00 0.00 87195.50 20401.64 67017.68 00:14:50.847 =================================================================================================================== 00:14:50.847 Total : 11704.97 45.72 0.00 0.00 87195.50 20401.64 67017.68 00:14:50.847 0 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2291042 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2291042 ']' 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2291042 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2291042 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2291042' 00:14:50.847 killing process with pid 2291042 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2291042 00:14:50.847 Received shutdown signal, test time was about 10.000000 seconds 00:14:50.847 00:14:50.847 Latency(us) 00:14:50.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.847 
=================================================================================================================== 00:14:50.847 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2291042 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:50.847 rmmod nvme_tcp 00:14:50.847 rmmod nvme_fabrics 00:14:50.847 rmmod nvme_keyring 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2290820 ']' 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2290820 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2290820 ']' 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2290820 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2290820 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2290820' 00:14:50.847 killing process with pid 2290820 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2290820 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2290820 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.847 14:42:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.417 14:42:11 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:51.417 00:14:51.417 real 0m20.127s 00:14:51.417 user 0m25.000s 00:14:51.417 sys 0m5.435s 00:14:51.417 14:42:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:51.417 14:42:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:51.417 ************************************ 00:14:51.417 END TEST nvmf_queue_depth 00:14:51.417 ************************************ 00:14:51.417 14:42:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:51.417 14:42:11 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:51.417 14:42:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:51.417 14:42:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:51.417 14:42:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:51.678 ************************************ 00:14:51.678 START TEST nvmf_target_multipath 00:14:51.678 ************************************ 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:51.678 * Looking for test storage... 00:14:51.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:51.678 14:42:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:56.965 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:56.965 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.965 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:56.966 Found net devices under 0000:86:00.0: cvl_0_0 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:56.966 Found net devices under 0000:86:00.1: cvl_0_1 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:56.966 14:42:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:56.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:14:56.966 00:14:56.966 --- 10.0.0.2 ping statistics --- 00:14:56.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.966 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:56.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.456 ms 00:14:56.966 00:14:56.966 --- 10.0.0.1 ping statistics --- 00:14:56.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.966 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:56.966 only one NIC for nvmf test 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.966 rmmod nvme_tcp 00:14:56.966 rmmod nvme_fabrics 00:14:56.966 rmmod nvme_keyring 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.966 14:42:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:59.509 00:14:59.509 real 0m7.568s 00:14:59.509 user 0m1.618s 00:14:59.509 sys 0m3.963s 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.509 14:42:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:59.509 ************************************ 00:14:59.509 END TEST nvmf_target_multipath 00:14:59.509 ************************************ 00:14:59.509 14:42:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:59.509 14:42:19 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:59.509 14:42:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:59.509 14:42:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.509 14:42:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:59.509 ************************************ 00:14:59.509 START TEST nvmf_zcopy 00:14:59.509 ************************************ 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:59.509 * Looking for test storage... 
00:14:59.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.509 14:42:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:59.510 14:42:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.794 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:04.794 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:04.794 14:42:24 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:04.794 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:04.794 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:04.794 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:04.794 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:04.794 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:04.794 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:04.794 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:04.794 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:04.794 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:04.794 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:04.795 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.795 
14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:04.795 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:04.795 Found net devices under 0000:86:00.0: cvl_0_0 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:04.795 Found net devices under 0000:86:00.1: cvl_0_1 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:04.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:15:04.795 00:15:04.795 --- 10.0.0.2 ping statistics --- 00:15:04.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.795 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:04.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:04.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.460 ms 00:15:04.795 00:15:04.795 --- 10.0.0.1 ping statistics --- 00:15:04.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.795 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:04.795 14:42:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:04.795 14:42:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:04.795 14:42:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:04.795 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:04.795 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.795 14:42:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:04.795 14:42:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2300389 00:15:04.795 14:42:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2300389 00:15:04.795 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2300389 ']' 00:15:04.795 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.795 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:04.795 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.795 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:04.795 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.795 [2024-07-25 14:42:25.043074] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:15:04.795 [2024-07-25 14:42:25.043117] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.795 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.056 [2024-07-25 14:42:25.099773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.056 [2024-07-25 14:42:25.178468] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.056 [2024-07-25 14:42:25.178505] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
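The trace just above is the standard phy-NIC bring-up that nvmftestinit performs before a TCP target test, followed by nvmfappstart launching the target: the first e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, both directions are ping-checked, and nvmf_tgt is started inside the namespace. A by-hand sketch of the same sequence, using the interface names and addresses from this run (root privileges assumed; the nvmf_tgt path is relative to the SPDK build used by this job):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator side
ping -c 1 10.0.0.2                                             # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator reachability check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # shm id 0, tracepoint mask 0xFFFF, core mask 0x2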
00:15:05.056 [2024-07-25 14:42:25.178512] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.056 [2024-07-25 14:42:25.178518] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.056 [2024-07-25 14:42:25.178523] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.056 [2024-07-25 14:42:25.178540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.629 [2024-07-25 14:42:25.897730] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.629 [2024-07-25 14:42:25.913860] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.629 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.950 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.950 14:42:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:05.950 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.950 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.950 malloc0 00:15:05.950 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.950 
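At this point the target has a zero-copy TCP transport, the nqn.2016-06.io.spdk:cnode1 subsystem with its data and discovery listeners on 10.0.0.2:4420, and a 32 MiB / 4096-byte-block malloc bdev; the next traced call attaches that bdev as namespace 1. The rpc_cmd helper is, in effect, scripts/rpc.py against the default /var/tmp/spdk.sock, so the same configuration could be applied by hand roughly as sketched below (arguments copied from the trace):

scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy            # TCP transport with zero-copy enabled, as invoked in this run
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                   # 32 MiB RAM-backed bdev, 4096-byte blocks
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1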
14:42:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:05.950 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.950 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.950 14:42:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.950 14:42:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:05.950 14:42:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:05.950 14:42:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:05.950 14:42:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:05.950 14:42:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:05.950 14:42:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:05.950 { 00:15:05.950 "params": { 00:15:05.950 "name": "Nvme$subsystem", 00:15:05.950 "trtype": "$TEST_TRANSPORT", 00:15:05.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.950 "adrfam": "ipv4", 00:15:05.950 "trsvcid": "$NVMF_PORT", 00:15:05.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.950 "hdgst": ${hdgst:-false}, 00:15:05.950 "ddgst": ${ddgst:-false} 00:15:05.950 }, 00:15:05.950 "method": "bdev_nvme_attach_controller" 00:15:05.950 } 00:15:05.950 EOF 00:15:05.951 )") 00:15:05.951 14:42:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:05.951 14:42:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:05.951 14:42:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:05.951 14:42:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:05.951 "params": { 00:15:05.951 "name": "Nvme1", 00:15:05.951 "trtype": "tcp", 00:15:05.951 "traddr": "10.0.0.2", 00:15:05.951 "adrfam": "ipv4", 00:15:05.951 "trsvcid": "4420", 00:15:05.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.951 "hdgst": false, 00:15:05.951 "ddgst": false 00:15:05.951 }, 00:15:05.951 "method": "bdev_nvme_attach_controller" 00:15:05.951 }' 00:15:05.951 [2024-07-25 14:42:25.990743] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:15:05.951 [2024-07-25 14:42:25.990783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2300439 ] 00:15:05.951 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.951 [2024-07-25 14:42:26.044435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.951 [2024-07-25 14:42:26.118335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.210 Running I/O for 10 seconds... 
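The gen_nvmf_target_json heredoc traced above assembles the bdev_nvme_attach_controller entry that bdevperf reads over /dev/fd/62. Written out as a regular file, the run amounts to roughly the sketch below: the outer "subsystems"/"config" envelope is the usual SPDK JSON-config layout and is assumed here, while the controller parameters and bdevperf flags are the ones printed in the log; the file name is illustrative.

cat > /tmp/bdevperf-nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf-nvme.json -t 10 -q 128 -w verify -o 8192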
00:15:16.195 00:15:16.195 Latency(us) 00:15:16.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.195 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:16.195 Verification LBA range: start 0x0 length 0x1000 00:15:16.195 Nvme1n1 : 10.01 7761.32 60.64 0.00 0.00 16448.31 1396.20 48781.58 00:15:16.195 =================================================================================================================== 00:15:16.195 Total : 7761.32 60.64 0.00 0.00 16448.31 1396.20 48781.58 00:15:16.462 14:42:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2302264 00:15:16.462 14:42:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:16.462 14:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:16.462 14:42:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:16.462 14:42:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:16.462 14:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:16.462 14:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:16.462 14:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:16.462 14:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:16.462 { 00:15:16.462 "params": { 00:15:16.462 "name": "Nvme$subsystem", 00:15:16.462 "trtype": "$TEST_TRANSPORT", 00:15:16.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:16.462 "adrfam": "ipv4", 00:15:16.462 "trsvcid": "$NVMF_PORT", 00:15:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:16.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:16.462 "hdgst": ${hdgst:-false}, 00:15:16.462 "ddgst": ${ddgst:-false} 00:15:16.462 }, 00:15:16.462 "method": "bdev_nvme_attach_controller" 00:15:16.462 } 00:15:16.462 EOF 00:15:16.462 )") 00:15:16.462 14:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:16.462 [2024-07-25 14:42:36.511638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.511671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 14:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:15:16.462 14:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:16.462 14:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:16.462 "params": { 00:15:16.462 "name": "Nvme1", 00:15:16.462 "trtype": "tcp", 00:15:16.462 "traddr": "10.0.0.2", 00:15:16.462 "adrfam": "ipv4", 00:15:16.462 "trsvcid": "4420", 00:15:16.462 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:16.462 "hdgst": false, 00:15:16.462 "ddgst": false 00:15:16.462 }, 00:15:16.462 "method": "bdev_nvme_attach_controller" 00:15:16.462 }' 00:15:16.462 [2024-07-25 14:42:36.523641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.523653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.535674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.535683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.547706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.547715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.548717] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:15:16.462 [2024-07-25 14:42:36.548758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302264 ] 00:15:16.462 [2024-07-25 14:42:36.559738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.559748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.462 [2024-07-25 14:42:36.571769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.571778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.583799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.583809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.595832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.595841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.602845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.462 [2024-07-25 14:42:36.607866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.607875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.619901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.619912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.631929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.631938] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.643965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.643986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.655993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.656005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.668026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.668035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.678855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.462 [2024-07-25 14:42:36.680061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.680072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.692102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.692121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.704127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.704142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.716157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.716168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.728198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.728209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.740226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.740237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.462 [2024-07-25 14:42:36.752255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.462 [2024-07-25 14:42:36.752264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.726 [2024-07-25 14:42:36.764283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.726 [2024-07-25 14:42:36.764307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.726 [2024-07-25 14:42:36.776327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.726 [2024-07-25 14:42:36.776345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.726 [2024-07-25 14:42:36.788356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.726 [2024-07-25 14:42:36.788369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.726 [2024-07-25 14:42:36.800388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.726 [2024-07-25 14:42:36.800399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:15:16.726 [2024-07-25 14:42:36.812418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.726 [2024-07-25 14:42:36.812427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.726 [2024-07-25 14:42:36.824450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.726 [2024-07-25 14:42:36.824459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.726 [2024-07-25 14:42:36.836488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.726 [2024-07-25 14:42:36.836501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.726 [2024-07-25 14:42:36.848523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.727 [2024-07-25 14:42:36.848537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.727 [2024-07-25 14:42:36.860553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.727 [2024-07-25 14:42:36.860564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.727 [2024-07-25 14:42:36.872594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.727 [2024-07-25 14:42:36.872610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.727 Running I/O for 5 seconds... 00:15:16.727 [2024-07-25 14:42:36.884617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.727 [2024-07-25 14:42:36.884626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.727 [2024-07-25 14:42:36.906263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.727 [2024-07-25 14:42:36.906282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.727 [2024-07-25 14:42:36.919943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.727 [2024-07-25 14:42:36.919961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.727 [2024-07-25 14:42:36.928233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.727 [2024-07-25 14:42:36.928250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.727 [2024-07-25 14:42:36.943397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.727 [2024-07-25 14:42:36.943414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.727 [2024-07-25 14:42:36.960786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.727 [2024-07-25 14:42:36.960805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.727 [2024-07-25 14:42:36.976268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.727 [2024-07-25 14:42:36.976287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.727 [2024-07-25 14:42:36.994147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.727 [2024-07-25 14:42:36.994166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.727 [2024-07-25 14:42:37.009098] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.727 [2024-07-25 14:42:37.009117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.025939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.025957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.043788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.043808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.057710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.057730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.073402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.073420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.083249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.083268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.099747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.099765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.114798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.114816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.130834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.130852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.142589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.142607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.159188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.159207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.173152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.173169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.190008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.190026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.197210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.197228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.214364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.214382] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.230300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.230318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.245601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.245620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.259826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.259845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.986 [2024-07-25 14:42:37.272574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.986 [2024-07-25 14:42:37.272594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.287686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.287705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.305107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.305126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.315428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.315447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.332571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.332590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.347593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.347612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.356506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.356524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.371160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.371179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.379858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.379876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.397583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.397601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.406241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.406259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.415660] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.415679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.430953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.430973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.439879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.439898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.456087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.456108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.470166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.470186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.487145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.487164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.503319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.503338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.519977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.519995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.247 [2024-07-25 14:42:37.534457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.247 [2024-07-25 14:42:37.534475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.545805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.545824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.562195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.562214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.572300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.572318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.587593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.587611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.604547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.604566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.613706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.613724] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.630028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.630049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.646050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.646068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.660933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.660951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.672727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.672745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.681572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.681590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.691522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.691539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.700703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.700720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.710392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.710409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.726586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.726604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.743935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.743953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.752721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.752738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.768737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.768756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.778050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.778068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.507 [2024-07-25 14:42:37.794238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.507 [2024-07-25 14:42:37.794256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:37.802680] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:37.802699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:37.818631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:37.818650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:37.836081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:37.836099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:37.852041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:37.852065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:37.868551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:37.868569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:37.885114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:37.885132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:37.901131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:37.901149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:37.911874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:37.911892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:37.927072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:37.927090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:37.942278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:37.942297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:37.956990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:37.957008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:37.968406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:37.968424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:37.978258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:37.978275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:37.987190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:37.987207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:38.001675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:38.001693] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.767 [2024-07-25 14:42:38.010202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.767 [2024-07-25 14:42:38.010220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.768 [2024-07-25 14:42:38.024721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.768 [2024-07-25 14:42:38.024739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.768 [2024-07-25 14:42:38.035444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.768 [2024-07-25 14:42:38.035461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.768 [2024-07-25 14:42:38.044376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.768 [2024-07-25 14:42:38.044399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.768 [2024-07-25 14:42:38.058748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.768 [2024-07-25 14:42:38.058770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.072422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.072442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.087511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.087530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.098902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.098921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.108150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.108167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.123628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.123646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.133149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.133168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.140481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.140498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.150510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.150528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.159668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.159686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.174465] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.174483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.192057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.192076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.205521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.205538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.214689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.214708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.223255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.223273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.235924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.235941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.249815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.249833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.257247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.257265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.269825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.269847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.286405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.286423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.302666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.302684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.028 [2024-07-25 14:42:38.313890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.028 [2024-07-25 14:42:38.313909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.321561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.321579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.337577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.337595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.347679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.347696] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.356473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.356491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.371190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.371208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.387709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.387726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.405423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.405441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.423156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.423175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.438656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.438674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.454262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.454279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.464551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.464569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.479670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.479688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.495911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.495930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.512998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.513016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.522297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.522315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.530386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.530407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.539303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.539320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.554930] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.554947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.288 [2024-07-25 14:42:38.572032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.288 [2024-07-25 14:42:38.572054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.548 [2024-07-25 14:42:38.586869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.548 [2024-07-25 14:42:38.586887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.548 [2024-07-25 14:42:38.598436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.548 [2024-07-25 14:42:38.598454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.548 [2024-07-25 14:42:38.612940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.548 [2024-07-25 14:42:38.612959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.548 [2024-07-25 14:42:38.627496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.548 [2024-07-25 14:42:38.627515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.548 [2024-07-25 14:42:38.642298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.548 [2024-07-25 14:42:38.642317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.548 [2024-07-25 14:42:38.653501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.548 [2024-07-25 14:42:38.653521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.548 [2024-07-25 14:42:38.662556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.548 [2024-07-25 14:42:38.662575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.548 [2024-07-25 14:42:38.669906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.548 [2024-07-25 14:42:38.669923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.548 [2024-07-25 14:42:38.684890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.548 [2024-07-25 14:42:38.684908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.548 [2024-07-25 14:42:38.697923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.548 [2024-07-25 14:42:38.697942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.548 [2024-07-25 14:42:38.711725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.548 [2024-07-25 14:42:38.711743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.548 [2024-07-25 14:42:38.726487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.548 [2024-07-25 14:42:38.726505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.548 [2024-07-25 14:42:38.737176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.548 [2024-07-25 14:42:38.737194] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.548 [2024-07-25 14:42:38.751850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.549 [2024-07-25 14:42:38.751869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.549 [2024-07-25 14:42:38.762846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.549 [2024-07-25 14:42:38.762865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.549 [2024-07-25 14:42:38.777221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.549 [2024-07-25 14:42:38.777244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.549 [2024-07-25 14:42:38.788728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.549 [2024-07-25 14:42:38.788747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.549 [2024-07-25 14:42:38.804097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.549 [2024-07-25 14:42:38.804117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.549 [2024-07-25 14:42:38.821195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.549 [2024-07-25 14:42:38.821214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.549 [2024-07-25 14:42:38.836854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.549 [2024-07-25 14:42:38.836874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:38.848398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.808 [2024-07-25 14:42:38.848417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:38.862947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.808 [2024-07-25 14:42:38.862965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:38.876833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.808 [2024-07-25 14:42:38.876852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:38.890428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.808 [2024-07-25 14:42:38.890447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:38.904986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.808 [2024-07-25 14:42:38.905005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:38.911986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.808 [2024-07-25 14:42:38.912004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:38.924730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.808 [2024-07-25 14:42:38.924749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:38.933530] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.808 [2024-07-25 14:42:38.933548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:38.948395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.808 [2024-07-25 14:42:38.948414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:38.963224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.808 [2024-07-25 14:42:38.963243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:38.977125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.808 [2024-07-25 14:42:38.977145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:38.988419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.808 [2024-07-25 14:42:38.988437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:39.002983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.808 [2024-07-25 14:42:39.003003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:39.013867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.808 [2024-07-25 14:42:39.013885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.808 [2024-07-25 14:42:39.023430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.809 [2024-07-25 14:42:39.023448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.809 [2024-07-25 14:42:39.031976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.809 [2024-07-25 14:42:39.031994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.809 [2024-07-25 14:42:39.041018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.809 [2024-07-25 14:42:39.041036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.809 [2024-07-25 14:42:39.056353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.809 [2024-07-25 14:42:39.056371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.809 [2024-07-25 14:42:39.073155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.809 [2024-07-25 14:42:39.073173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.809 [2024-07-25 14:42:39.090037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.809 [2024-07-25 14:42:39.090062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.107604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.107625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.116664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.116682] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.131713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.131731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.148604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.148623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.166481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.166500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.180312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.180330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.192009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.192027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.201439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.201457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.217585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.217603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.227721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.227739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.242658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.242675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.258799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.258816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.274742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.274760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.291297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.291315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.308679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.308697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.325116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.325134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.341352] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.341370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.069 [2024-07-25 14:42:39.351662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.069 [2024-07-25 14:42:39.351679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.362046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.362065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.378470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.378489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.392890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.392909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.408512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.408531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.425521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.425539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.433493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.433510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.448201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.448220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.462533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.462551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.473325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.473342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.487737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.487755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.501986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.502004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.512574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.512592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.521572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.521589] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.531673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.531691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.541139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.541158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.551138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.551156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.566098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.566116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.574920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.574938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.329 [2024-07-25 14:42:39.592468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.329 [2024-07-25 14:42:39.592487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.330 [2024-07-25 14:42:39.607586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.330 [2024-07-25 14:42:39.607604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.330 [2024-07-25 14:42:39.617749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.330 [2024-07-25 14:42:39.617767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.633416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.633435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.643682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.643699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.659923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.659941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.669905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.669922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.679152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.679170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.693600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.693618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.701989] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.702006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.714537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.714554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.730117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.730135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.739264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.739281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.748568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.748586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.763626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.763644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.774181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.774199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.782890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.782908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.797288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.797307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.809225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.809243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.823650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.823668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.836064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.836083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.851259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.851278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.590 [2024-07-25 14:42:39.868519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.590 [2024-07-25 14:42:39.868537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:39.883735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:39.883754] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:39.892514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:39.892531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:39.902469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:39.902487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:39.911677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:39.911695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:39.919659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:39.919676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:39.934259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:39.934277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:39.947066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:39.947084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:39.961817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:39.961835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:39.972555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:39.972573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:39.981318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:39.981336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:39.996326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:39.996348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:40.012113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:40.012134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:40.025126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:40.025147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:40.032878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:40.032898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:40.048768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:40.048789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:40.066912] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:40.066934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:40.080430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:40.080450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:40.095766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:40.095785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:40.111928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:40.111948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:40.124909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:40.124928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.850 [2024-07-25 14:42:40.141915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.850 [2024-07-25 14:42:40.141934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.110 [2024-07-25 14:42:40.157305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.110 [2024-07-25 14:42:40.157325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.110 [2024-07-25 14:42:40.167928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.110 [2024-07-25 14:42:40.167947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.110 [2024-07-25 14:42:40.176909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.110 [2024-07-25 14:42:40.176926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.110 [2024-07-25 14:42:40.192666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.110 [2024-07-25 14:42:40.192685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.110 [2024-07-25 14:42:40.210339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.110 [2024-07-25 14:42:40.210357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.110 [2024-07-25 14:42:40.224233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.110 [2024-07-25 14:42:40.224252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.110 [2024-07-25 14:42:40.238173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.110 [2024-07-25 14:42:40.238193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.110 [2024-07-25 14:42:40.252248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.110 [2024-07-25 14:42:40.252267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.110 [2024-07-25 14:42:40.262928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.110 [2024-07-25 14:42:40.262951] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.110 [2024-07-25 14:42:40.277469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.110 [2024-07-25 14:42:40.277488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.110 [2024-07-25 14:42:40.291775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.111 [2024-07-25 14:42:40.291793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.111 [2024-07-25 14:42:40.302810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.111 [2024-07-25 14:42:40.302829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.111 [2024-07-25 14:42:40.312182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.111 [2024-07-25 14:42:40.312200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.111 [2024-07-25 14:42:40.320870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.111 [2024-07-25 14:42:40.320888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.111 [2024-07-25 14:42:40.330582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.111 [2024-07-25 14:42:40.330600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.111 [2024-07-25 14:42:40.344530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.111 [2024-07-25 14:42:40.344549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.111 [2024-07-25 14:42:40.352142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.111 [2024-07-25 14:42:40.352160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.111 [2024-07-25 14:42:40.360084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.111 [2024-07-25 14:42:40.360101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.111 [2024-07-25 14:42:40.374686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.111 [2024-07-25 14:42:40.374705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.111 [2024-07-25 14:42:40.387371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.111 [2024-07-25 14:42:40.387390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.403900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.403920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.419337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.419355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.429585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.429603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.444318] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.444336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.453104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.453121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.463795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.463812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.473618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.473636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.488303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.488325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.503782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.503800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.519210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.519228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.530110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.530128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.546426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.546443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.562366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.562384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.577256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.577274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.591695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.591714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.606634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.606652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.619101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.619119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.629538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.629557] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.638299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.638316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.647639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.647656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.371 [2024-07-25 14:42:40.660911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.371 [2024-07-25 14:42:40.660929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.672561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.672579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.687599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.687617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.698142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.698160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.710181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.710198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.726779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.726797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.743354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.743378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.759179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.759197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.773770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.773788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.786877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.786894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.802585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.802603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.820511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.820531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.835555] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.835574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.851457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.851476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.861586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.861604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.877724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.877743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.887208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.887226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.896003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.896021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.904827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.904844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.632 [2024-07-25 14:42:40.921169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.632 [2024-07-25 14:42:40.921188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:40.935936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:40.935956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:40.948337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:40.948356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:40.955159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:40.955177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:40.968326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:40.968344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:40.982018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:40.982036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:40.995895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:40.995914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:41.009238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:41.009255] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:41.022797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:41.022816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:41.036297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:41.036316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:41.049224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:41.049243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:41.056407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:41.056424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:41.073358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:41.073377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:41.088869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:41.088887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:41.099797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:41.099815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:41.114362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:41.114380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:41.128627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:41.128646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:41.145063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:41.145082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:41.161500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:41.161518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.893 [2024-07-25 14:42:41.174104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.893 [2024-07-25 14:42:41.174123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.187657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.187676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.201714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.201733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.215652] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.215671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.229621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.229639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.244740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.244758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.255151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.255168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.269290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.269318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.281262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.281280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.290418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.290436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.299630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.299648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.314541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.314559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.325128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.325146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.339867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.339885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.350669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.350687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.364901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.364919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.378967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.378986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.387828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.387845] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.396892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.396910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.405957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.405976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.421539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.421558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.154 [2024-07-25 14:42:41.436711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.154 [2024-07-25 14:42:41.436730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.452339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.452361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.468176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.468195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.478010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.478028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.487179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.487197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.502164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.502182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.513148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.513168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.527678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.527697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.539170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.539189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.548281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.548299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.563006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.563024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.574027] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.574053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.583194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.583212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.599162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.599181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.615326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.615344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.632837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.632856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.647277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.647295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.657448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.657467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.666668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.666686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.683927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.683946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.414 [2024-07-25 14:42:41.696230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.414 [2024-07-25 14:42:41.696249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.675 [2024-07-25 14:42:41.712007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.675 [2024-07-25 14:42:41.712026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.675 [2024-07-25 14:42:41.722101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.675 [2024-07-25 14:42:41.722119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.675 [2024-07-25 14:42:41.731430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.675 [2024-07-25 14:42:41.731449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.675 [2024-07-25 14:42:41.746038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.675 [2024-07-25 14:42:41.746062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.675 [2024-07-25 14:42:41.755066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.675 [2024-07-25 14:42:41.755084] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.675 [2024-07-25 14:42:41.770688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.675 [2024-07-25 14:42:41.770707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.675 [2024-07-25 14:42:41.784918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.675 [2024-07-25 14:42:41.784938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.675 [2024-07-25 14:42:41.797996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.675 [2024-07-25 14:42:41.798015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.675 [2024-07-25 14:42:41.806778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.675 [2024-07-25 14:42:41.806795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.675 [2024-07-25 14:42:41.822630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.675 [2024-07-25 14:42:41.822648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.675 [2024-07-25 14:42:41.837536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.675 [2024-07-25 14:42:41.837554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.675 [2024-07-25 14:42:41.852258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.675 [2024-07-25 14:42:41.852276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.675 [2024-07-25 14:42:41.862795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.675 [2024-07-25 14:42:41.862813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.675 [2024-07-25 14:42:41.872564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.675 [2024-07-25 14:42:41.872581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.676 [2024-07-25 14:42:41.885898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.676 [2024-07-25 14:42:41.885915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.676 [2024-07-25 14:42:41.896309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.676 [2024-07-25 14:42:41.896326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.676 00:15:21.676 Latency(us) 00:15:21.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.676 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:21.676 Nvme1n1 : 5.00 15267.34 119.28 0.00 0.00 8377.77 2137.04 33280.89 00:15:21.676 =================================================================================================================== 00:15:21.676 Total : 15267.34 119.28 0.00 0.00 8377.77 2137.04 33280.89 00:15:21.676 [2024-07-25 14:42:41.908346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.676 [2024-07-25 14:42:41.908360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.676 [2024-07-25 14:42:41.920372] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.676 [2024-07-25 14:42:41.920391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.676 [2024-07-25 14:42:41.932406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.676 [2024-07-25 14:42:41.932424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.676 [2024-07-25 14:42:41.944430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.676 [2024-07-25 14:42:41.944443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.676 [2024-07-25 14:42:41.956458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.676 [2024-07-25 14:42:41.956473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.936 [2024-07-25 14:42:41.968495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.936 [2024-07-25 14:42:41.968510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.936 [2024-07-25 14:42:41.980523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.936 [2024-07-25 14:42:41.980537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.936 [2024-07-25 14:42:41.992555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.936 [2024-07-25 14:42:41.992568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.936 [2024-07-25 14:42:42.004586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.936 [2024-07-25 14:42:42.004597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.936 [2024-07-25 14:42:42.016618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.936 [2024-07-25 14:42:42.016626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.936 [2024-07-25 14:42:42.028654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.936 [2024-07-25 14:42:42.028665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.936 [2024-07-25 14:42:42.040683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.936 [2024-07-25 14:42:42.040692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.936 [2024-07-25 14:42:42.052716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.936 [2024-07-25 14:42:42.052725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.936 [2024-07-25 14:42:42.064748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.936 [2024-07-25 14:42:42.064760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.936 [2024-07-25 14:42:42.076776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.936 [2024-07-25 14:42:42.076784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2302264) - No such process 00:15:21.936 14:42:42 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@49 -- # wait 2302264 00:15:21.936 14:42:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.936 14:42:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.936 14:42:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:21.936 14:42:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.937 14:42:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:21.937 14:42:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.937 14:42:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:21.937 delay0 00:15:21.937 14:42:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.937 14:42:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:21.937 14:42:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.937 14:42:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:21.937 14:42:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.937 14:42:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:21.937 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.937 [2024-07-25 14:42:42.206587] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:28.515 Initializing NVMe Controllers 00:15:28.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:28.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:28.515 Initialization complete. Launching workers. 
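The trace above first hammers the error path (re-adding NSID 1 while it is still attached, which produces the repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs), then swaps the namespace for a delay bdev and launches the abort example against the TCP listener. A minimal hand-driven sketch of that same sequence through scripts/rpc.py is shown below; the subsystem NQN, bdev names, delay parameters, and listener address are taken from the trace, while the default RPC socket (/var/tmp/spdk.sock), the working directory (an SPDK build tree), and malloc0 being the original backing bdev of NSID 1 are assumptions.

    #!/usr/bin/env bash
    # Sketch of the zcopy abort scenario traced above. Assumes a running SPDK
    # nvmf target with subsystem nqn.2016-06.io.spdk:cnode1 listening on
    # 10.0.0.2:4420 and a malloc bdev named malloc0 attached as NSID 1.
    RPC=./scripts/rpc.py   # default RPC socket /var/tmp/spdk.sock assumed

    # Re-adding a namespace with an NSID that is already in use is expected to
    # fail; this is the source of the repeated error pairs in the trace.
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true

    # Replace NSID 1 with a delay bdev layered on malloc0 (same artificial
    # read/write latency values the trace passes) so that I/O stays in flight
    # long enough for aborts to land on it.
    $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $RPC bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # Drive 50/50 random read/write I/O at queue depth 64 for 5 seconds and
    # submit aborts against it, exactly as the example run above does.
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The counters that follow (I/O completed vs. failed, aborts submitted vs. failed to submit, success/unsuccess) are the abort example's own summary of how many of those in-flight commands were successfully aborted against the delayed namespace.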
00:15:28.515 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 91 00:15:28.515 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 366, failed to submit 45 00:15:28.515 success 191, unsuccess 175, failed 0 00:15:28.515 14:42:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:28.515 14:42:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:28.515 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:28.515 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:28.516 rmmod nvme_tcp 00:15:28.516 rmmod nvme_fabrics 00:15:28.516 rmmod nvme_keyring 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2300389 ']' 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2300389 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2300389 ']' 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2300389 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2300389 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2300389' 00:15:28.516 killing process with pid 2300389 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2300389 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2300389 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.516 14:42:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.427 14:42:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:30.427 00:15:30.427 real 0m31.359s 00:15:30.427 user 0m42.480s 00:15:30.427 sys 0m10.446s 00:15:30.427 14:42:50 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:30.427 14:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:30.427 ************************************ 00:15:30.427 END TEST nvmf_zcopy 00:15:30.427 ************************************ 00:15:30.690 14:42:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:30.690 14:42:50 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:30.690 14:42:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:30.690 14:42:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.690 14:42:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:30.690 ************************************ 00:15:30.690 START TEST nvmf_nmic 00:15:30.690 ************************************ 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:30.690 * Looking for test storage... 00:15:30.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.690 14:42:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:30.691 14:42:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:36.033 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:36.033 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:36.033 Found net devices under 0000:86:00.0: cvl_0_0 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:36.033 Found net devices under 0000:86:00.1: cvl_0_1 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:36.033 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:36.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:15:36.294 00:15:36.294 --- 10.0.0.2 ping statistics --- 00:15:36.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.294 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:15:36.294 00:15:36.294 --- 10.0.0.1 ping statistics --- 00:15:36.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.294 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2307622 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2307622 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2307622 ']' 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.294 14:42:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:36.294 [2024-07-25 14:42:56.493047] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:15:36.294 [2024-07-25 14:42:56.493091] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.294 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.294 [2024-07-25 14:42:56.553448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.554 [2024-07-25 14:42:56.635299] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.554 [2024-07-25 14:42:56.635337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:36.554 [2024-07-25 14:42:56.635344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.554 [2024-07-25 14:42:56.635351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.554 [2024-07-25 14:42:56.635356] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.554 [2024-07-25 14:42:56.635395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.554 [2024-07-25 14:42:56.635415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.554 [2024-07-25 14:42:56.635507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.554 [2024-07-25 14:42:56.635509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.124 [2024-07-25 14:42:57.345902] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.124 Malloc0 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.124 [2024-07-25 14:42:57.397859] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:37.124 test case1: single bdev can't be used in multiple subsystems 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.124 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.384 [2024-07-25 14:42:57.421768] bdev.c:8075:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:37.384 [2024-07-25 14:42:57.421789] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:37.384 [2024-07-25 14:42:57.421796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.384 request: 00:15:37.384 { 00:15:37.384 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:37.384 "namespace": { 00:15:37.384 "bdev_name": "Malloc0", 00:15:37.384 "no_auto_visible": false 00:15:37.384 }, 00:15:37.384 "method": "nvmf_subsystem_add_ns", 00:15:37.384 "req_id": 1 00:15:37.384 } 00:15:37.384 Got JSON-RPC error response 00:15:37.384 response: 00:15:37.384 { 00:15:37.384 "code": -32602, 00:15:37.384 "message": "Invalid parameters" 00:15:37.384 } 00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:37.384 Adding namespace failed - expected result. 
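For reference, the RPC sequence that test case1 above exercises can be replayed by hand against a running nvmf_tgt. This is a minimal sketch assembled from the rpc_cmd calls visible in the trace (rpc_cmd forwards to scripts/rpc.py; the bdev name, NQNs and the 10.0.0.2:4420 listener are simply the defaults this test uses, not requirements):

  rpc=./scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192          # create the TCP transport with the options used by the test
  $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # A bdev already claimed by one subsystem cannot be added to another:
  # the last call below is expected to fail with "Invalid parameters",
  # exactly as in the JSON-RPC error shown above.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'add_ns failed as expected'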
00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:37.384 test case2: host connect to nvmf target in multiple paths 00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.384 [2024-07-25 14:42:57.433910] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.384 14:42:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:38.767 14:42:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:39.707 14:42:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:39.707 14:42:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:39.707 14:42:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:39.707 14:42:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:39.707 14:42:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:41.619 14:43:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:41.619 14:43:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:41.619 14:43:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:41.619 14:43:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:41.619 14:43:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:41.619 14:43:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:41.619 14:43:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:41.619 [global] 00:15:41.619 thread=1 00:15:41.619 invalidate=1 00:15:41.619 rw=write 00:15:41.619 time_based=1 00:15:41.619 runtime=1 00:15:41.619 ioengine=libaio 00:15:41.619 direct=1 00:15:41.619 bs=4096 00:15:41.619 iodepth=1 00:15:41.619 norandommap=0 00:15:41.619 numjobs=1 00:15:41.619 00:15:41.619 verify_dump=1 00:15:41.619 verify_backlog=512 00:15:41.619 verify_state_save=0 00:15:41.619 do_verify=1 00:15:41.619 verify=crc32c-intel 00:15:41.619 [job0] 00:15:41.619 filename=/dev/nvme0n1 00:15:41.619 Could not set queue depth (nvme0n1) 00:15:41.878 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:41.879 fio-3.35 00:15:41.879 Starting 1 thread 00:15:43.264 00:15:43.264 job0: (groupid=0, jobs=1): err= 0: pid=2308695: Thu Jul 25 14:43:03 2024 00:15:43.264 read: IOPS=514, BW=2058KiB/s (2107kB/s)(2072KiB/1007msec) 00:15:43.264 slat (nsec): min=4469, max=45824, avg=16681.60, stdev=7703.99 
00:15:43.264 clat (usec): min=370, max=42971, avg=1164.68, stdev=4413.74 00:15:43.264 lat (usec): min=377, max=42994, avg=1181.36, stdev=4414.22 00:15:43.264 clat percentiles (usec): 00:15:43.264 | 1.00th=[ 465], 5.00th=[ 523], 10.00th=[ 537], 20.00th=[ 611], 00:15:43.264 | 30.00th=[ 660], 40.00th=[ 676], 50.00th=[ 693], 60.00th=[ 701], 00:15:43.264 | 70.00th=[ 725], 80.00th=[ 775], 90.00th=[ 816], 95.00th=[ 848], 00:15:43.264 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:15:43.264 | 99.99th=[42730] 00:15:43.264 write: IOPS=1016, BW=4068KiB/s (4165kB/s)(4096KiB/1007msec); 0 zone resets 00:15:43.264 slat (nsec): min=9030, max=57661, avg=12007.25, stdev=5786.35 00:15:43.264 clat (usec): min=236, max=1679, avg=367.62, stdev=174.72 00:15:43.264 lat (usec): min=245, max=1689, avg=379.63, stdev=178.80 00:15:43.264 clat percentiles (usec): 00:15:43.264 | 1.00th=[ 241], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 247], 00:15:43.264 | 30.00th=[ 255], 40.00th=[ 273], 50.00th=[ 306], 60.00th=[ 314], 00:15:43.264 | 70.00th=[ 363], 80.00th=[ 457], 90.00th=[ 701], 95.00th=[ 766], 00:15:43.264 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 1270], 99.95th=[ 1680], 00:15:43.264 | 99.99th=[ 1680] 00:15:43.264 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:15:43.264 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:15:43.264 lat (usec) : 250=16.15%, 500=40.79%, 750=29.90%, 1000=12.52% 00:15:43.264 lat (msec) : 2=0.26%, 50=0.39% 00:15:43.264 cpu : usr=1.19%, sys=2.09%, ctx=1542, majf=0, minf=2 00:15:43.264 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:43.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.264 issued rwts: total=518,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.264 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:43.264 00:15:43.264 Run status group 0 (all jobs): 00:15:43.264 READ: bw=2058KiB/s (2107kB/s), 2058KiB/s-2058KiB/s (2107kB/s-2107kB/s), io=2072KiB (2122kB), run=1007-1007msec 00:15:43.264 WRITE: bw=4068KiB/s (4165kB/s), 4068KiB/s-4068KiB/s (4165kB/s-4165kB/s), io=4096KiB (4194kB), run=1007-1007msec 00:15:43.264 00:15:43.264 Disk stats (read/write): 00:15:43.264 nvme0n1: ios=565/1024, merge=0/0, ticks=545/367, in_queue=912, util=93.89% 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:43.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.264 rmmod nvme_tcp 00:15:43.264 rmmod nvme_fabrics 00:15:43.264 rmmod nvme_keyring 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2307622 ']' 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2307622 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2307622 ']' 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2307622 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.264 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2307622 00:15:43.524 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:43.524 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:43.524 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2307622' 00:15:43.524 killing process with pid 2307622 00:15:43.524 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2307622 00:15:43.524 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2307622 00:15:43.524 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.524 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:43.524 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:43.524 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.524 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.524 14:43:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.524 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.524 14:43:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.067 14:43:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:46.067 00:15:46.067 real 0m15.065s 00:15:46.067 user 0m35.159s 00:15:46.067 sys 0m4.851s 00:15:46.067 14:43:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.067 14:43:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:46.067 ************************************ 00:15:46.067 END TEST nvmf_nmic 00:15:46.067 ************************************ 00:15:46.067 14:43:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:46.067 14:43:05 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:46.067 14:43:05 
nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:46.067 14:43:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.067 14:43:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:46.067 ************************************ 00:15:46.067 START TEST nvmf_fio_target 00:15:46.067 ************************************ 00:15:46.067 14:43:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:46.067 * Looking for test storage... 00:15:46.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.067 14:43:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.067 14:43:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:46.067 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:46.068 14:43:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:51.346 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.347 14:43:10 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:51.347 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:51.347 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.347 14:43:10 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:51.347 Found net devices under 0000:86:00.0: cvl_0_0 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:51.347 Found net devices under 0000:86:00.1: cvl_0_1 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.347 14:43:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:51.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:15:51.347 00:15:51.347 --- 10.0.0.2 ping statistics --- 00:15:51.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.347 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:51.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:15:51.347 00:15:51.347 --- 10.0.0.1 ping statistics --- 00:15:51.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.347 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2312353 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2312353 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2312353 ']' 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
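Stripped of the xtrace noise, the nvmftestinit/nvmf_tcp_init sequence above (run identically for the nmic and fio_target tests) moves one port of the E810 NIC into a private network namespace and puts the two ports on the 10.0.0.0/24 test network. A minimal standalone sketch, using the same interface and namespace names that appear in this log:

  ip netns add cvl_0_0_ns_spdk                                        # namespace that will host the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic through
  ping -c 1 10.0.0.2                                                  # reachability checks, as in the log
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt binary is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so the target listens on 10.0.0.2 while the initiator-side nvme connect commands run from the host.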
00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.347 14:43:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.347 [2024-07-25 14:43:11.286627] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:15:51.347 [2024-07-25 14:43:11.286671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.347 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.348 [2024-07-25 14:43:11.345544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.348 [2024-07-25 14:43:11.427126] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.348 [2024-07-25 14:43:11.427162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.348 [2024-07-25 14:43:11.427169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.348 [2024-07-25 14:43:11.427175] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.348 [2024-07-25 14:43:11.427180] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.348 [2024-07-25 14:43:11.427223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.348 [2024-07-25 14:43:11.427242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.348 [2024-07-25 14:43:11.427332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.348 [2024-07-25 14:43:11.427333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.916 14:43:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.916 14:43:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:51.916 14:43:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.916 14:43:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:51.916 14:43:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.916 14:43:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.916 14:43:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:52.175 [2024-07-25 14:43:12.277546] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.175 14:43:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.434 14:43:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:52.434 14:43:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.434 14:43:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:52.434 14:43:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.693 14:43:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
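The fio.sh preparation that begins here and continues in the following entries builds two plain malloc bdevs, a RAID0 bdev and a concat bdev, and exposes all of them as namespaces of one subsystem before fio runs against the resulting devices. A rough sketch of that RPC sequence, pieced together from the rpc_py calls in the trace (rpc_py is the scripts/rpc.py path set at the top of fio.sh; bdev_malloc_create prints the name of the bdev it creates, which the script collects):

  rpc=./scripts/rpc.py

  # two plain malloc bdevs used as namespaces directly
  malloc_bdevs="$($rpc bdev_malloc_create 64 512) "
  malloc_bdevs+=$($rpc bdev_malloc_create 64 512)

  # two more malloc bdevs striped into a RAID0 bdev
  raid_malloc_bdevs="$($rpc bdev_malloc_create 64 512) "
  raid_malloc_bdevs+=$($rpc bdev_malloc_create 64 512)
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$raid_malloc_bdevs"

  # three malloc bdevs concatenated into a concat bdev
  concat_malloc_bdevs="$($rpc bdev_malloc_create 64 512) "
  concat_malloc_bdevs+="$($rpc bdev_malloc_create 64 512) "
  concat_malloc_bdevs+=$($rpc bdev_malloc_create 64 512)
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b "$concat_malloc_bdevs"

  # one subsystem carrying all the bdevs as namespaces, listening on the target address
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in $malloc_bdevs raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420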
00:15:52.693 14:43:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.953 14:43:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:52.953 14:43:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:53.212 14:43:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:53.212 14:43:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:53.212 14:43:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:53.511 14:43:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:53.511 14:43:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:53.771 14:43:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:53.771 14:43:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:54.031 14:43:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:54.031 14:43:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:54.031 14:43:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:54.291 14:43:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:54.291 14:43:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:54.551 14:43:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.551 [2024-07-25 14:43:14.775507] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.551 14:43:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:54.811 14:43:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:55.070 14:43:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:56.450 14:43:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:56.450 14:43:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:56.450 14:43:16 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:56.450 14:43:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:56.450 14:43:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:56.450 14:43:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:58.418 14:43:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:58.418 14:43:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:58.418 14:43:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:58.418 14:43:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:58.418 14:43:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:58.418 14:43:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:58.418 14:43:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:58.418 [global] 00:15:58.418 thread=1 00:15:58.418 invalidate=1 00:15:58.418 rw=write 00:15:58.418 time_based=1 00:15:58.418 runtime=1 00:15:58.418 ioengine=libaio 00:15:58.418 direct=1 00:15:58.418 bs=4096 00:15:58.418 iodepth=1 00:15:58.418 norandommap=0 00:15:58.418 numjobs=1 00:15:58.418 00:15:58.418 verify_dump=1 00:15:58.418 verify_backlog=512 00:15:58.418 verify_state_save=0 00:15:58.418 do_verify=1 00:15:58.418 verify=crc32c-intel 00:15:58.418 [job0] 00:15:58.418 filename=/dev/nvme0n1 00:15:58.418 [job1] 00:15:58.418 filename=/dev/nvme0n2 00:15:58.418 [job2] 00:15:58.418 filename=/dev/nvme0n3 00:15:58.418 [job3] 00:15:58.418 filename=/dev/nvme0n4 00:15:58.418 Could not set queue depth (nvme0n1) 00:15:58.418 Could not set queue depth (nvme0n2) 00:15:58.418 Could not set queue depth (nvme0n3) 00:15:58.418 Could not set queue depth (nvme0n4) 00:15:58.677 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.677 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.677 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.677 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.677 fio-3.35 00:15:58.677 Starting 4 threads 00:16:00.059 00:16:00.059 job0: (groupid=0, jobs=1): err= 0: pid=2313794: Thu Jul 25 14:43:19 2024 00:16:00.059 read: IOPS=19, BW=77.6KiB/s (79.5kB/s)(80.0KiB/1031msec) 00:16:00.059 slat (nsec): min=9454, max=23372, avg=21901.45, stdev=3020.62 00:16:00.059 clat (usec): min=41600, max=43069, avg=42070.82, stdev=330.83 00:16:00.059 lat (usec): min=41622, max=43092, avg=42092.72, stdev=329.21 00:16:00.059 clat percentiles (usec): 00:16:00.059 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:16:00.059 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:00.059 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:16:00.059 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:00.059 | 99.99th=[43254] 00:16:00.059 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:16:00.059 slat (usec): min=9, max=670, avg=14.68, stdev=48.42 00:16:00.059 
clat (usec): min=65, max=1488, avg=352.04, stdev=166.13 00:16:00.059 lat (usec): min=244, max=2054, avg=366.72, stdev=182.19 00:16:00.059 clat percentiles (usec): 00:16:00.059 | 1.00th=[ 237], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 247], 00:16:00.059 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 314], 00:16:00.059 | 70.00th=[ 367], 80.00th=[ 433], 90.00th=[ 553], 95.00th=[ 676], 00:16:00.059 | 99.00th=[ 1004], 99.50th=[ 1369], 99.90th=[ 1483], 99.95th=[ 1483], 00:16:00.059 | 99.99th=[ 1483] 00:16:00.059 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:16:00.059 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:00.059 lat (usec) : 100=0.19%, 250=23.12%, 500=61.09%, 750=9.21%, 1000=1.50% 00:16:00.059 lat (msec) : 2=1.13%, 50=3.76% 00:16:00.059 cpu : usr=0.29%, sys=0.49%, ctx=535, majf=0, minf=1 00:16:00.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.059 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.059 job1: (groupid=0, jobs=1): err= 0: pid=2313795: Thu Jul 25 14:43:19 2024 00:16:00.059 read: IOPS=850, BW=3401KiB/s (3482kB/s)(3404KiB/1001msec) 00:16:00.059 slat (nsec): min=3588, max=49451, avg=10379.71, stdev=7317.14 00:16:00.059 clat (usec): min=396, max=1314, avg=751.71, stdev=97.75 00:16:00.059 lat (usec): min=404, max=1319, avg=762.09, stdev=96.45 00:16:00.059 clat percentiles (usec): 00:16:00.059 | 1.00th=[ 545], 5.00th=[ 603], 10.00th=[ 635], 20.00th=[ 676], 00:16:00.059 | 30.00th=[ 693], 40.00th=[ 734], 50.00th=[ 758], 60.00th=[ 775], 00:16:00.059 | 70.00th=[ 791], 80.00th=[ 807], 90.00th=[ 865], 95.00th=[ 914], 00:16:00.059 | 99.00th=[ 1037], 99.50th=[ 1057], 99.90th=[ 1319], 99.95th=[ 1319], 00:16:00.059 | 99.99th=[ 1319] 00:16:00.059 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:00.059 slat (nsec): min=6480, max=37100, avg=10957.88, stdev=1956.37 00:16:00.059 clat (usec): min=236, max=2175, avg=327.42, stdev=162.55 00:16:00.059 lat (usec): min=248, max=2182, avg=338.38, stdev=162.77 00:16:00.059 clat percentiles (usec): 00:16:00.059 | 1.00th=[ 241], 5.00th=[ 245], 10.00th=[ 247], 20.00th=[ 258], 00:16:00.059 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 281], 00:16:00.059 | 70.00th=[ 289], 80.00th=[ 322], 90.00th=[ 478], 95.00th=[ 611], 00:16:00.059 | 99.00th=[ 1172], 99.50th=[ 1352], 99.90th=[ 1778], 99.95th=[ 2180], 00:16:00.059 | 99.99th=[ 2180] 00:16:00.059 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=2 00:16:00.059 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:16:00.059 lat (usec) : 250=7.47%, 500=42.08%, 750=25.81%, 1000=22.93% 00:16:00.059 lat (msec) : 2=1.65%, 4=0.05% 00:16:00.059 cpu : usr=1.40%, sys=2.40%, ctx=1875, majf=0, minf=1 00:16:00.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.059 issued rwts: total=851,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.059 job2: (groupid=0, jobs=1): err= 0: pid=2313796: Thu 
Jul 25 14:43:19 2024 00:16:00.059 read: IOPS=19, BW=79.8KiB/s (81.7kB/s)(80.0KiB/1003msec) 00:16:00.059 slat (nsec): min=9315, max=24077, avg=22595.45, stdev=3184.03 00:16:00.059 clat (usec): min=40906, max=42989, avg=41968.13, stdev=358.03 00:16:00.059 lat (usec): min=40915, max=43013, avg=41990.72, stdev=360.26 00:16:00.059 clat percentiles (usec): 00:16:00.059 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:16:00.059 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:00.059 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:00.059 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:00.059 | 99.99th=[42730] 00:16:00.059 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:16:00.059 slat (nsec): min=9485, max=38484, avg=12404.06, stdev=2059.30 00:16:00.059 clat (usec): min=242, max=854, avg=302.99, stdev=101.77 00:16:00.059 lat (usec): min=254, max=893, avg=315.40, stdev=102.25 00:16:00.059 clat percentiles (usec): 00:16:00.059 | 1.00th=[ 245], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 255], 00:16:00.059 | 30.00th=[ 258], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:16:00.060 | 70.00th=[ 273], 80.00th=[ 314], 90.00th=[ 420], 95.00th=[ 545], 00:16:00.060 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 857], 99.95th=[ 857], 00:16:00.060 | 99.99th=[ 857] 00:16:00.060 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:16:00.060 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:00.060 lat (usec) : 250=10.34%, 500=78.57%, 750=7.14%, 1000=0.19% 00:16:00.060 lat (msec) : 50=3.76% 00:16:00.060 cpu : usr=0.50%, sys=0.50%, ctx=534, majf=0, minf=1 00:16:00.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.060 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.060 job3: (groupid=0, jobs=1): err= 0: pid=2313797: Thu Jul 25 14:43:19 2024 00:16:00.060 read: IOPS=907, BW=3632KiB/s (3719kB/s)(3708KiB/1021msec) 00:16:00.060 slat (usec): min=6, max=322, avg= 8.98, stdev=19.93 00:16:00.060 clat (usec): min=86, max=42047, avg=741.22, stdev=3323.85 00:16:00.060 lat (usec): min=350, max=42068, avg=750.20, stdev=3324.97 00:16:00.060 clat percentiles (usec): 00:16:00.060 | 1.00th=[ 351], 5.00th=[ 367], 10.00th=[ 388], 20.00th=[ 408], 00:16:00.060 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 445], 60.00th=[ 469], 00:16:00.060 | 70.00th=[ 510], 80.00th=[ 562], 90.00th=[ 578], 95.00th=[ 586], 00:16:00.060 | 99.00th=[ 791], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:16:00.060 | 99.99th=[42206] 00:16:00.060 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:16:00.060 slat (nsec): min=9539, max=94836, avg=11019.07, stdev=3066.52 00:16:00.060 clat (usec): min=234, max=800, avg=301.71, stdev=91.77 00:16:00.060 lat (usec): min=247, max=895, avg=312.73, stdev=92.68 00:16:00.060 clat percentiles (usec): 00:16:00.060 | 1.00th=[ 239], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 247], 00:16:00.060 | 30.00th=[ 253], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 285], 00:16:00.060 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 383], 95.00th=[ 498], 00:16:00.060 | 99.00th=[ 676], 99.50th=[ 676], 99.90th=[ 791], 99.95th=[ 799], 
00:16:00.060 | 99.99th=[ 799] 00:16:00.060 bw ( KiB/s): min= 1632, max= 6560, per=34.37%, avg=4096.00, stdev=3484.62, samples=2 00:16:00.060 iops : min= 408, max= 1640, avg=1024.00, stdev=871.16, samples=2 00:16:00.060 lat (usec) : 100=0.05%, 250=12.97%, 500=69.25%, 750=17.07%, 1000=0.26% 00:16:00.060 lat (msec) : 2=0.10%, 50=0.31% 00:16:00.060 cpu : usr=0.78%, sys=2.06%, ctx=1952, majf=0, minf=2 00:16:00.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.060 issued rwts: total=927,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.060 00:16:00.060 Run status group 0 (all jobs): 00:16:00.060 READ: bw=7053KiB/s (7223kB/s), 77.6KiB/s-3632KiB/s (79.5kB/s-3719kB/s), io=7272KiB (7447kB), run=1001-1031msec 00:16:00.060 WRITE: bw=11.6MiB/s (12.2MB/s), 1986KiB/s-4092KiB/s (2034kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1031msec 00:16:00.060 00:16:00.060 Disk stats (read/write): 00:16:00.060 nvme0n1: ios=41/512, merge=0/0, ticks=1601/181, in_queue=1782, util=98.50% 00:16:00.060 nvme0n2: ios=687/1024, merge=0/0, ticks=557/323, in_queue=880, util=89.02% 00:16:00.060 nvme0n3: ios=39/512, merge=0/0, ticks=1641/150, in_queue=1791, util=98.85% 00:16:00.060 nvme0n4: ios=978/1024, merge=0/0, ticks=701/307, in_queue=1008, util=98.11% 00:16:00.060 14:43:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:00.060 [global] 00:16:00.060 thread=1 00:16:00.060 invalidate=1 00:16:00.060 rw=randwrite 00:16:00.060 time_based=1 00:16:00.060 runtime=1 00:16:00.060 ioengine=libaio 00:16:00.060 direct=1 00:16:00.060 bs=4096 00:16:00.060 iodepth=1 00:16:00.060 norandommap=0 00:16:00.060 numjobs=1 00:16:00.060 00:16:00.060 verify_dump=1 00:16:00.060 verify_backlog=512 00:16:00.060 verify_state_save=0 00:16:00.060 do_verify=1 00:16:00.060 verify=crc32c-intel 00:16:00.060 [job0] 00:16:00.060 filename=/dev/nvme0n1 00:16:00.060 [job1] 00:16:00.060 filename=/dev/nvme0n2 00:16:00.060 [job2] 00:16:00.060 filename=/dev/nvme0n3 00:16:00.060 [job3] 00:16:00.060 filename=/dev/nvme0n4 00:16:00.060 Could not set queue depth (nvme0n1) 00:16:00.060 Could not set queue depth (nvme0n2) 00:16:00.060 Could not set queue depth (nvme0n3) 00:16:00.060 Could not set queue depth (nvme0n4) 00:16:00.060 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.060 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.060 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.060 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.060 fio-3.35 00:16:00.060 Starting 4 threads 00:16:01.440 00:16:01.440 job0: (groupid=0, jobs=1): err= 0: pid=2314169: Thu Jul 25 14:43:21 2024 00:16:01.440 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:01.440 slat (nsec): min=7092, max=22415, avg=7895.33, stdev=1111.31 00:16:01.440 clat (usec): min=336, max=41092, avg=582.99, stdev=1792.13 00:16:01.440 lat (usec): min=344, max=41114, avg=590.89, stdev=1792.53 00:16:01.440 clat percentiles (usec): 00:16:01.440 | 
1.00th=[ 355], 5.00th=[ 396], 10.00th=[ 441], 20.00th=[ 453], 00:16:01.440 | 30.00th=[ 461], 40.00th=[ 482], 50.00th=[ 498], 60.00th=[ 506], 00:16:01.440 | 70.00th=[ 519], 80.00th=[ 537], 90.00th=[ 562], 95.00th=[ 586], 00:16:01.440 | 99.00th=[ 930], 99.50th=[ 1631], 99.90th=[40633], 99.95th=[41157], 00:16:01.440 | 99.99th=[41157] 00:16:01.440 write: IOPS=1260, BW=5043KiB/s (5164kB/s)(5048KiB/1001msec); 0 zone resets 00:16:01.440 slat (nsec): min=9802, max=37582, avg=11771.09, stdev=1549.57 00:16:01.440 clat (usec): min=242, max=795, avg=294.49, stdev=87.72 00:16:01.440 lat (usec): min=253, max=826, avg=306.27, stdev=88.05 00:16:01.440 clat percentiles (usec): 00:16:01.440 | 1.00th=[ 245], 5.00th=[ 249], 10.00th=[ 251], 20.00th=[ 253], 00:16:01.440 | 30.00th=[ 258], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:16:01.440 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 375], 95.00th=[ 490], 00:16:01.440 | 99.00th=[ 676], 99.50th=[ 742], 99.90th=[ 758], 99.95th=[ 799], 00:16:01.440 | 99.99th=[ 799] 00:16:01.440 bw ( KiB/s): min= 4096, max= 4096, per=28.75%, avg=4096.00, stdev= 0.00, samples=1 00:16:01.440 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:01.440 lat (usec) : 250=3.63%, 500=72.88%, 750=22.48%, 1000=0.61% 00:16:01.440 lat (msec) : 2=0.17%, 4=0.13%, 50=0.09% 00:16:01.440 cpu : usr=2.10%, sys=1.60%, ctx=2291, majf=0, minf=2 00:16:01.440 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.440 issued rwts: total=1024,1262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.440 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.441 job1: (groupid=0, jobs=1): err= 0: pid=2314170: Thu Jul 25 14:43:21 2024 00:16:01.441 read: IOPS=19, BW=77.3KiB/s (79.1kB/s)(80.0KiB/1035msec) 00:16:01.441 slat (nsec): min=9383, max=23146, avg=21830.55, stdev=3003.25 00:16:01.441 clat (usec): min=41086, max=43114, avg=41943.40, stdev=371.28 00:16:01.441 lat (usec): min=41109, max=43136, avg=41965.23, stdev=372.39 00:16:01.441 clat percentiles (usec): 00:16:01.441 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:16:01.441 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:01.441 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:01.441 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:01.441 | 99.99th=[43254] 00:16:01.441 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:16:01.441 slat (nsec): min=8670, max=74054, avg=10318.34, stdev=3420.16 00:16:01.441 clat (usec): min=242, max=1183, avg=369.72, stdev=130.74 00:16:01.441 lat (usec): min=252, max=1193, avg=380.04, stdev=131.68 00:16:01.441 clat percentiles (usec): 00:16:01.441 | 1.00th=[ 245], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 281], 00:16:01.441 | 30.00th=[ 289], 40.00th=[ 338], 50.00th=[ 371], 60.00th=[ 375], 00:16:01.441 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 562], 95.00th=[ 668], 00:16:01.441 | 99.00th=[ 881], 99.50th=[ 947], 99.90th=[ 1188], 99.95th=[ 1188], 00:16:01.441 | 99.99th=[ 1188] 00:16:01.441 bw ( KiB/s): min= 4096, max= 4096, per=28.75%, avg=4096.00, stdev= 0.00, samples=1 00:16:01.441 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:01.441 lat (usec) : 250=3.57%, 500=82.71%, 750=7.14%, 1000=2.44% 00:16:01.441 lat (msec) : 2=0.38%, 50=3.76% 00:16:01.441 cpu 
: usr=0.58%, sys=0.19%, ctx=533, majf=0, minf=1 00:16:01.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.441 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.441 job2: (groupid=0, jobs=1): err= 0: pid=2314171: Thu Jul 25 14:43:21 2024 00:16:01.441 read: IOPS=18, BW=75.5KiB/s (77.3kB/s)(76.0KiB/1007msec) 00:16:01.441 slat (nsec): min=9309, max=23916, avg=22333.32, stdev=3260.22 00:16:01.441 clat (usec): min=40944, max=42122, avg=41845.55, stdev=347.93 00:16:01.441 lat (usec): min=40968, max=42145, avg=41867.88, stdev=348.79 00:16:01.441 clat percentiles (usec): 00:16:01.441 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:16:01.441 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:01.441 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:01.441 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:01.441 | 99.99th=[42206] 00:16:01.441 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:16:01.441 slat (nsec): min=9201, max=37373, avg=11410.99, stdev=2376.42 00:16:01.441 clat (usec): min=241, max=1203, avg=393.27, stdev=139.32 00:16:01.441 lat (usec): min=251, max=1228, avg=404.68, stdev=139.64 00:16:01.441 clat percentiles (usec): 00:16:01.441 | 1.00th=[ 245], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 281], 00:16:01.441 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:16:01.441 | 70.00th=[ 383], 80.00th=[ 396], 90.00th=[ 578], 95.00th=[ 668], 00:16:01.441 | 99.00th=[ 857], 99.50th=[ 1074], 99.90th=[ 1205], 99.95th=[ 1205], 00:16:01.441 | 99.99th=[ 1205] 00:16:01.441 bw ( KiB/s): min= 4096, max= 4096, per=28.75%, avg=4096.00, stdev= 0.00, samples=1 00:16:01.441 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:01.441 lat (usec) : 250=9.79%, 500=71.00%, 750=12.81%, 1000=2.26% 00:16:01.441 lat (msec) : 2=0.56%, 50=3.58% 00:16:01.441 cpu : usr=0.20%, sys=0.70%, ctx=533, majf=0, minf=1 00:16:01.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.441 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.441 job3: (groupid=0, jobs=1): err= 0: pid=2314172: Thu Jul 25 14:43:21 2024 00:16:01.441 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:01.441 slat (nsec): min=7408, max=37940, avg=8422.07, stdev=1564.52 00:16:01.441 clat (usec): min=467, max=1079, avg=542.11, stdev=51.27 00:16:01.441 lat (usec): min=476, max=1087, avg=550.53, stdev=51.28 00:16:01.441 clat percentiles (usec): 00:16:01.441 | 1.00th=[ 482], 5.00th=[ 502], 10.00th=[ 506], 20.00th=[ 515], 00:16:01.441 | 30.00th=[ 519], 40.00th=[ 529], 50.00th=[ 529], 60.00th=[ 537], 00:16:01.441 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 586], 95.00th=[ 603], 00:16:01.441 | 99.00th=[ 750], 99.50th=[ 914], 99.90th=[ 1074], 99.95th=[ 1074], 00:16:01.441 | 99.99th=[ 1074] 00:16:01.441 write: IOPS=1398, BW=5594KiB/s (5729kB/s)(5600KiB/1001msec); 0 zone resets 00:16:01.441 slat (nsec): min=10751, 
max=44358, avg=12016.06, stdev=1889.44 00:16:01.441 clat (usec): min=238, max=820, avg=292.28, stdev=71.38 00:16:01.441 lat (usec): min=250, max=857, avg=304.30, stdev=71.84 00:16:01.441 clat percentiles (usec): 00:16:01.441 | 1.00th=[ 241], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 249], 00:16:01.441 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:16:01.441 | 70.00th=[ 289], 80.00th=[ 334], 90.00th=[ 371], 95.00th=[ 420], 00:16:01.441 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 685], 99.95th=[ 824], 00:16:01.441 | 99.99th=[ 824] 00:16:01.441 bw ( KiB/s): min= 5144, max= 5144, per=36.11%, avg=5144.00, stdev= 0.00, samples=1 00:16:01.441 iops : min= 1286, max= 1286, avg=1286.00, stdev= 0.00, samples=1 00:16:01.441 lat (usec) : 250=13.04%, 500=45.30%, 750=41.17%, 1000=0.37% 00:16:01.441 lat (msec) : 2=0.12% 00:16:01.441 cpu : usr=2.30%, sys=3.80%, ctx=2425, majf=0, minf=1 00:16:01.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.441 issued rwts: total=1024,1400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.441 00:16:01.441 Run status group 0 (all jobs): 00:16:01.441 READ: bw=8066KiB/s (8259kB/s), 75.5KiB/s-4092KiB/s (77.3kB/s-4190kB/s), io=8348KiB (8548kB), run=1001-1035msec 00:16:01.441 WRITE: bw=13.9MiB/s (14.6MB/s), 1979KiB/s-5594KiB/s (2026kB/s-5729kB/s), io=14.4MiB (15.1MB), run=1001-1035msec 00:16:01.441 00:16:01.441 Disk stats (read/write): 00:16:01.441 nvme0n1: ios=928/1024, merge=0/0, ticks=1072/294, in_queue=1366, util=90.17% 00:16:01.441 nvme0n2: ios=65/512, merge=0/0, ticks=738/189, in_queue=927, util=93.81% 00:16:01.441 nvme0n3: ios=41/512, merge=0/0, ticks=1557/202, in_queue=1759, util=97.81% 00:16:01.441 nvme0n4: ios=974/1024, merge=0/0, ticks=1471/298, in_queue=1769, util=96.75% 00:16:01.441 14:43:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:01.441 [global] 00:16:01.441 thread=1 00:16:01.441 invalidate=1 00:16:01.441 rw=write 00:16:01.441 time_based=1 00:16:01.441 runtime=1 00:16:01.441 ioengine=libaio 00:16:01.441 direct=1 00:16:01.441 bs=4096 00:16:01.441 iodepth=128 00:16:01.441 norandommap=0 00:16:01.441 numjobs=1 00:16:01.441 00:16:01.441 verify_dump=1 00:16:01.441 verify_backlog=512 00:16:01.441 verify_state_save=0 00:16:01.441 do_verify=1 00:16:01.441 verify=crc32c-intel 00:16:01.441 [job0] 00:16:01.441 filename=/dev/nvme0n1 00:16:01.441 [job1] 00:16:01.441 filename=/dev/nvme0n2 00:16:01.441 [job2] 00:16:01.441 filename=/dev/nvme0n3 00:16:01.441 [job3] 00:16:01.441 filename=/dev/nvme0n4 00:16:01.441 Could not set queue depth (nvme0n1) 00:16:01.441 Could not set queue depth (nvme0n2) 00:16:01.441 Could not set queue depth (nvme0n3) 00:16:01.441 Could not set queue depth (nvme0n4) 00:16:01.700 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:01.700 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:01.701 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:01.701 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
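For reference, the fio-wrapper call above (-p nvmf -i 4096 -d 128 -t write -r 1 -v) expands to exactly the [global]/[job0]-[job3] job file printed in the log. A minimal standalone sketch that reproduces the same workload (assuming /dev/nvme0n1 through /dev/nvme0n4 are still connected; the job-file path is hypothetical, and the harness itself always drives fio through scripts/fio-wrapper rather than invoking fio directly) would be:

# Illustrative sketch only; parameters copied from the job file printed above.
cat > /tmp/nvmf_write_qd128.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf_write_qd128.fio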
00:16:01.701 fio-3.35 00:16:01.701 Starting 4 threads 00:16:03.081 00:16:03.081 job0: (groupid=0, jobs=1): err= 0: pid=2314545: Thu Jul 25 14:43:23 2024 00:16:03.081 read: IOPS=3005, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1003msec) 00:16:03.081 slat (nsec): min=1537, max=13725k, avg=124236.11, stdev=696994.11 00:16:03.081 clat (usec): min=1288, max=31273, avg=16082.25, stdev=4588.30 00:16:03.081 lat (usec): min=6778, max=34282, avg=16206.48, stdev=4620.80 00:16:03.081 clat percentiles (usec): 00:16:03.081 | 1.00th=[ 7111], 5.00th=[ 8455], 10.00th=[10159], 20.00th=[12256], 00:16:03.081 | 30.00th=[13566], 40.00th=[14484], 50.00th=[15795], 60.00th=[17171], 00:16:03.081 | 70.00th=[18482], 80.00th=[19792], 90.00th=[22152], 95.00th=[24249], 00:16:03.081 | 99.00th=[26608], 99.50th=[28443], 99.90th=[31327], 99.95th=[31327], 00:16:03.081 | 99.99th=[31327] 00:16:03.081 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:16:03.081 slat (usec): min=2, max=42043, avg=198.78, stdev=1209.55 00:16:03.081 clat (usec): min=6593, max=71722, avg=22176.61, stdev=10227.49 00:16:03.081 lat (usec): min=6598, max=99612, avg=22375.38, stdev=10386.65 00:16:03.081 clat percentiles (usec): 00:16:03.081 | 1.00th=[13304], 5.00th=[13566], 10.00th=[14091], 20.00th=[15533], 00:16:03.081 | 30.00th=[16909], 40.00th=[19006], 50.00th=[20055], 60.00th=[21365], 00:16:03.081 | 70.00th=[23200], 80.00th=[24773], 90.00th=[28967], 95.00th=[43779], 00:16:03.081 | 99.00th=[69731], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:16:03.081 | 99.99th=[71828] 00:16:03.081 bw ( KiB/s): min=12288, max=12288, per=21.63%, avg=12288.00, stdev= 0.00, samples=2 00:16:03.081 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:16:03.081 lat (msec) : 2=0.02%, 10=4.21%, 20=62.35%, 50=31.46%, 100=1.97% 00:16:03.081 cpu : usr=1.40%, sys=3.89%, ctx=508, majf=0, minf=1 00:16:03.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:03.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:03.081 issued rwts: total=3015,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:03.081 job1: (groupid=0, jobs=1): err= 0: pid=2314546: Thu Jul 25 14:43:23 2024 00:16:03.081 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:16:03.081 slat (nsec): min=1655, max=12886k, avg=125755.14, stdev=774791.92 00:16:03.081 clat (usec): min=8506, max=67916, avg=16279.68, stdev=7344.10 00:16:03.081 lat (usec): min=8508, max=67929, avg=16405.43, stdev=7412.81 00:16:03.081 clat percentiles (usec): 00:16:03.081 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11600], 00:16:03.081 | 30.00th=[12649], 40.00th=[13566], 50.00th=[14877], 60.00th=[15664], 00:16:03.081 | 70.00th=[17695], 80.00th=[20317], 90.00th=[21890], 95.00th=[25297], 00:16:03.081 | 99.00th=[58983], 99.50th=[64750], 99.90th=[67634], 99.95th=[67634], 00:16:03.081 | 99.99th=[67634] 00:16:03.081 write: IOPS=3897, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1003msec); 0 zone resets 00:16:03.081 slat (usec): min=2, max=33523, avg=134.81, stdev=792.41 00:16:03.081 clat (usec): min=1065, max=67907, avg=16255.60, stdev=6630.68 00:16:03.081 lat (usec): min=1473, max=67912, avg=16390.41, stdev=6679.88 00:16:03.081 clat percentiles (usec): 00:16:03.081 | 1.00th=[ 6128], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[11338], 00:16:03.081 | 30.00th=[12518], 40.00th=[13566], 
50.00th=[15139], 60.00th=[16581], 00:16:03.081 | 70.00th=[19006], 80.00th=[21103], 90.00th=[23462], 95.00th=[24773], 00:16:03.081 | 99.00th=[47449], 99.50th=[50070], 99.90th=[54789], 99.95th=[54789], 00:16:03.081 | 99.99th=[67634] 00:16:03.081 bw ( KiB/s): min=12288, max=17960, per=26.62%, avg=15124.00, stdev=4010.71, samples=2 00:16:03.081 iops : min= 3072, max= 4490, avg=3781.00, stdev=1002.68, samples=2 00:16:03.081 lat (msec) : 2=0.05%, 4=0.20%, 10=8.81%, 20=68.36%, 50=21.66% 00:16:03.081 lat (msec) : 100=0.92% 00:16:03.081 cpu : usr=2.10%, sys=3.59%, ctx=537, majf=0, minf=1 00:16:03.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:03.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:03.081 issued rwts: total=3584,3909,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:03.081 job2: (groupid=0, jobs=1): err= 0: pid=2314547: Thu Jul 25 14:43:23 2024 00:16:03.081 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:16:03.081 slat (nsec): min=1580, max=46042k, avg=111699.63, stdev=935682.79 00:16:03.081 clat (usec): min=3716, max=65875, avg=13922.11, stdev=9107.52 00:16:03.081 lat (usec): min=3725, max=65881, avg=14033.81, stdev=9144.98 00:16:03.081 clat percentiles (usec): 00:16:03.081 | 1.00th=[ 6587], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 9503], 00:16:03.081 | 30.00th=[10028], 40.00th=[10421], 50.00th=[11076], 60.00th=[11863], 00:16:03.081 | 70.00th=[13435], 80.00th=[15795], 90.00th=[20317], 95.00th=[26084], 00:16:03.081 | 99.00th=[56886], 99.50th=[65799], 99.90th=[65799], 99.95th=[65799], 00:16:03.081 | 99.99th=[65799] 00:16:03.081 write: IOPS=3685, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1002msec); 0 zone resets 00:16:03.081 slat (nsec): min=1982, max=41986k, avg=157391.44, stdev=1026580.67 00:16:03.081 clat (usec): min=1317, max=44367, avg=18245.95, stdev=6283.70 00:16:03.081 lat (usec): min=2377, max=77878, avg=18403.34, stdev=6382.09 00:16:03.081 clat percentiles (usec): 00:16:03.081 | 1.00th=[ 5735], 5.00th=[ 7832], 10.00th=[ 9110], 20.00th=[11338], 00:16:03.081 | 30.00th=[14353], 40.00th=[16319], 50.00th=[19006], 60.00th=[21890], 00:16:03.081 | 70.00th=[23462], 80.00th=[24511], 90.00th=[25560], 95.00th=[25822], 00:16:03.081 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:16:03.081 | 99.99th=[44303] 00:16:03.081 bw ( KiB/s): min=14128, max=14568, per=25.26%, avg=14348.00, stdev=311.13, samples=2 00:16:03.081 iops : min= 3532, max= 3642, avg=3587.00, stdev=77.78, samples=2 00:16:03.081 lat (msec) : 2=0.01%, 4=0.16%, 10=21.04%, 20=48.95%, 50=28.43% 00:16:03.081 lat (msec) : 100=1.40% 00:16:03.081 cpu : usr=2.90%, sys=2.20%, ctx=593, majf=0, minf=1 00:16:03.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:03.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:03.081 issued rwts: total=3584,3693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:03.081 job3: (groupid=0, jobs=1): err= 0: pid=2314548: Thu Jul 25 14:43:23 2024 00:16:03.081 read: IOPS=3195, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1004msec) 00:16:03.081 slat (nsec): min=1607, max=18277k, avg=112687.56, stdev=713842.70 00:16:03.081 clat (usec): min=812, max=81649, avg=13986.47, 
stdev=7939.19 00:16:03.081 lat (usec): min=822, max=81692, avg=14099.16, stdev=7990.31 00:16:03.081 clat percentiles (usec): 00:16:03.081 | 1.00th=[ 2769], 5.00th=[ 7046], 10.00th=[ 7832], 20.00th=[ 8586], 00:16:03.081 | 30.00th=[ 9765], 40.00th=[10945], 50.00th=[11994], 60.00th=[12911], 00:16:03.081 | 70.00th=[14484], 80.00th=[16909], 90.00th=[24511], 95.00th=[30802], 00:16:03.081 | 99.00th=[45876], 99.50th=[47973], 99.90th=[53216], 99.95th=[53216], 00:16:03.081 | 99.99th=[81265] 00:16:03.081 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:16:03.081 slat (usec): min=2, max=53229, avg=167.39, stdev=1401.32 00:16:03.081 clat (usec): min=3475, max=87688, avg=22968.18, stdev=15803.05 00:16:03.081 lat (usec): min=4584, max=87693, avg=23135.57, stdev=15870.61 00:16:03.081 clat percentiles (usec): 00:16:03.081 | 1.00th=[ 5407], 5.00th=[ 6587], 10.00th=[ 7308], 20.00th=[10159], 00:16:03.081 | 30.00th=[14222], 40.00th=[18482], 50.00th=[22152], 60.00th=[23462], 00:16:03.081 | 70.00th=[24773], 80.00th=[26346], 90.00th=[38536], 95.00th=[65274], 00:16:03.081 | 99.00th=[83362], 99.50th=[85459], 99.90th=[86508], 99.95th=[87557], 00:16:03.081 | 99.99th=[87557] 00:16:03.081 bw ( KiB/s): min=10640, max=18032, per=25.24%, avg=14336.00, stdev=5226.93, samples=2 00:16:03.081 iops : min= 2660, max= 4508, avg=3584.00, stdev=1306.73, samples=2 00:16:03.081 lat (usec) : 1000=0.03% 00:16:03.081 lat (msec) : 2=0.18%, 4=0.56%, 10=24.43%, 20=39.63%, 50=31.21% 00:16:03.081 lat (msec) : 100=3.96% 00:16:03.081 cpu : usr=1.79%, sys=3.19%, ctx=633, majf=0, minf=1 00:16:03.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:03.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:03.082 issued rwts: total=3208,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:03.082 00:16:03.082 Run status group 0 (all jobs): 00:16:03.082 READ: bw=52.1MiB/s (54.6MB/s), 11.7MiB/s-14.0MiB/s (12.3MB/s-14.7MB/s), io=52.3MiB (54.8MB), run=1002-1004msec 00:16:03.082 WRITE: bw=55.5MiB/s (58.2MB/s), 12.0MiB/s-15.2MiB/s (12.5MB/s-16.0MB/s), io=55.7MiB (58.4MB), run=1002-1004msec 00:16:03.082 00:16:03.082 Disk stats (read/write): 00:16:03.082 nvme0n1: ios=2406/2560, merge=0/0, ticks=14025/22818, in_queue=36843, util=90.98% 00:16:03.082 nvme0n2: ios=3091/3072, merge=0/0, ticks=34547/35821, in_queue=70368, util=95.13% 00:16:03.082 nvme0n3: ios=2809/3072, merge=0/0, ticks=40843/51895, in_queue=92738, util=98.44% 00:16:03.082 nvme0n4: ios=3094/3174, merge=0/0, ticks=41270/61595, in_queue=102865, util=99.06% 00:16:03.082 14:43:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:03.082 [global] 00:16:03.082 thread=1 00:16:03.082 invalidate=1 00:16:03.082 rw=randwrite 00:16:03.082 time_based=1 00:16:03.082 runtime=1 00:16:03.082 ioengine=libaio 00:16:03.082 direct=1 00:16:03.082 bs=4096 00:16:03.082 iodepth=128 00:16:03.082 norandommap=0 00:16:03.082 numjobs=1 00:16:03.082 00:16:03.082 verify_dump=1 00:16:03.082 verify_backlog=512 00:16:03.082 verify_state_save=0 00:16:03.082 do_verify=1 00:16:03.082 verify=crc32c-intel 00:16:03.082 [job0] 00:16:03.082 filename=/dev/nvme0n1 00:16:03.082 [job1] 00:16:03.082 filename=/dev/nvme0n2 00:16:03.082 [job2] 00:16:03.082 filename=/dev/nvme0n3 00:16:03.082 
[job3] 00:16:03.082 filename=/dev/nvme0n4 00:16:03.082 Could not set queue depth (nvme0n1) 00:16:03.082 Could not set queue depth (nvme0n2) 00:16:03.082 Could not set queue depth (nvme0n3) 00:16:03.082 Could not set queue depth (nvme0n4) 00:16:03.340 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.340 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.340 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.340 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.340 fio-3.35 00:16:03.340 Starting 4 threads 00:16:04.733 00:16:04.733 job0: (groupid=0, jobs=1): err= 0: pid=2314914: Thu Jul 25 14:43:24 2024 00:16:04.733 read: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec) 00:16:04.733 slat (nsec): min=1593, max=57617k, avg=425103.64, stdev=3804822.94 00:16:04.733 clat (msec): min=11, max=159, avg=49.86, stdev=37.79 00:16:04.733 lat (msec): min=11, max=162, avg=50.29, stdev=37.86 00:16:04.733 clat percentiles (msec): 00:16:04.733 | 1.00th=[ 12], 5.00th=[ 12], 10.00th=[ 15], 20.00th=[ 18], 00:16:04.733 | 30.00th=[ 19], 40.00th=[ 25], 50.00th=[ 33], 60.00th=[ 53], 00:16:04.733 | 70.00th=[ 72], 80.00th=[ 102], 90.00th=[ 109], 95.00th=[ 111], 00:16:04.733 | 99.00th=[ 120], 99.50th=[ 159], 99.90th=[ 159], 99.95th=[ 159], 00:16:04.733 | 99.99th=[ 159] 00:16:04.734 write: IOPS=863, BW=3454KiB/s (3537kB/s)(3520KiB/1019msec); 0 zone resets 00:16:04.734 slat (usec): min=2, max=167152, avg=898.57, stdev=8795.07 00:16:04.734 clat (msec): min=7, max=475, avg=117.25, stdev=124.23 00:16:04.734 lat (msec): min=13, max=475, avg=118.15, stdev=124.99 00:16:04.734 clat percentiles (msec): 00:16:04.734 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 19], 00:16:04.734 | 30.00th=[ 22], 40.00th=[ 47], 50.00th=[ 54], 60.00th=[ 72], 00:16:04.734 | 70.00th=[ 176], 80.00th=[ 192], 90.00th=[ 309], 95.00th=[ 363], 00:16:04.734 | 99.00th=[ 477], 99.50th=[ 477], 99.90th=[ 477], 99.95th=[ 477], 00:16:04.734 | 99.99th=[ 477] 00:16:04.734 bw ( KiB/s): min= 1328, max= 4688, per=7.10%, avg=3008.00, stdev=2375.88, samples=2 00:16:04.734 iops : min= 332, max= 1172, avg=752.00, stdev=593.97, samples=2 00:16:04.734 lat (msec) : 10=0.07%, 20=28.45%, 50=21.34%, 100=18.32%, 250=20.91% 00:16:04.734 lat (msec) : 500=10.92% 00:16:04.734 cpu : usr=0.39%, sys=0.49%, ctx=106, majf=0, minf=1 00:16:04.734 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:16:04.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.734 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.734 issued rwts: total=512,880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.734 job1: (groupid=0, jobs=1): err= 0: pid=2314917: Thu Jul 25 14:43:24 2024 00:16:04.734 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:16:04.734 slat (nsec): min=1464, max=14581k, avg=130728.31, stdev=762100.43 00:16:04.734 clat (usec): min=6348, max=49359, avg=18045.78, stdev=9425.33 00:16:04.734 lat (usec): min=7036, max=49366, avg=18176.50, stdev=9462.00 00:16:04.734 clat percentiles (usec): 00:16:04.734 | 1.00th=[ 7308], 5.00th=[ 8029], 10.00th=[ 8291], 20.00th=[ 9241], 00:16:04.734 | 30.00th=[12125], 40.00th=[14222], 50.00th=[15139], 60.00th=[16581], 
00:16:04.734 | 70.00th=[21103], 80.00th=[25560], 90.00th=[31589], 95.00th=[37487], 00:16:04.734 | 99.00th=[47973], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:16:04.734 | 99.99th=[49546] 00:16:04.734 write: IOPS=3213, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1003msec); 0 zone resets 00:16:04.734 slat (usec): min=2, max=29474, avg=181.09, stdev=859.29 00:16:04.734 clat (usec): min=899, max=75697, avg=21716.96, stdev=10653.01 00:16:04.734 lat (usec): min=4310, max=75705, avg=21898.06, stdev=10704.10 00:16:04.734 clat percentiles (usec): 00:16:04.734 | 1.00th=[ 4490], 5.00th=[10683], 10.00th=[13304], 20.00th=[14877], 00:16:04.734 | 30.00th=[17171], 40.00th=[18482], 50.00th=[20317], 60.00th=[21627], 00:16:04.734 | 70.00th=[22414], 80.00th=[25822], 90.00th=[28967], 95.00th=[38536], 00:16:04.734 | 99.00th=[68682], 99.50th=[70779], 99.90th=[74974], 99.95th=[76022], 00:16:04.734 | 99.99th=[76022] 00:16:04.734 bw ( KiB/s): min=12288, max=12488, per=29.24%, avg=12388.00, stdev=141.42, samples=2 00:16:04.734 iops : min= 3072, max= 3122, avg=3097.00, stdev=35.36, samples=2 00:16:04.734 lat (usec) : 1000=0.02% 00:16:04.734 lat (msec) : 10=12.84%, 20=44.59%, 50=40.54%, 100=2.02% 00:16:04.734 cpu : usr=2.40%, sys=2.30%, ctx=756, majf=0, minf=1 00:16:04.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:04.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.734 issued rwts: total=3072,3223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.734 job2: (groupid=0, jobs=1): err= 0: pid=2314918: Thu Jul 25 14:43:24 2024 00:16:04.734 read: IOPS=2672, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1007msec) 00:16:04.734 slat (nsec): min=1462, max=14007k, avg=151967.26, stdev=821572.78 00:16:04.734 clat (usec): min=3360, max=52592, avg=18031.33, stdev=8810.54 00:16:04.734 lat (usec): min=6112, max=52597, avg=18183.29, stdev=8853.86 00:16:04.734 clat percentiles (usec): 00:16:04.734 | 1.00th=[ 6915], 5.00th=[10159], 10.00th=[10683], 20.00th=[12387], 00:16:04.734 | 30.00th=[13304], 40.00th=[14484], 50.00th=[15401], 60.00th=[16909], 00:16:04.734 | 70.00th=[18744], 80.00th=[21890], 90.00th=[27132], 95.00th=[35914], 00:16:04.734 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:16:04.734 | 99.99th=[52691] 00:16:04.734 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:16:04.734 slat (usec): min=2, max=17360, avg=176.68, stdev=794.80 00:16:04.734 clat (usec): min=1833, max=73637, avg=25783.84, stdev=13856.58 00:16:04.734 lat (usec): min=1844, max=73645, avg=25960.52, stdev=13918.32 00:16:04.734 clat percentiles (usec): 00:16:04.734 | 1.00th=[ 3556], 5.00th=[ 5538], 10.00th=[ 6849], 20.00th=[10945], 00:16:04.734 | 30.00th=[16712], 40.00th=[23725], 50.00th=[25822], 60.00th=[30278], 00:16:04.734 | 70.00th=[35390], 80.00th=[39060], 90.00th=[42730], 95.00th=[45876], 00:16:04.734 | 99.00th=[62653], 99.50th=[71828], 99.90th=[73925], 99.95th=[73925], 00:16:04.734 | 99.99th=[73925] 00:16:04.734 bw ( KiB/s): min=12288, max=12288, per=29.01%, avg=12288.00, stdev= 0.00, samples=2 00:16:04.734 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:16:04.734 lat (msec) : 2=0.10%, 4=0.73%, 10=10.79%, 20=43.14%, 50=43.05% 00:16:04.734 lat (msec) : 100=2.19% 00:16:04.734 cpu : usr=1.89%, sys=2.88%, ctx=514, majf=0, minf=1 00:16:04.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:04.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.734 issued rwts: total=2691,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.734 job3: (groupid=0, jobs=1): err= 0: pid=2314919: Thu Jul 25 14:43:24 2024 00:16:04.734 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:16:04.734 slat (nsec): min=1571, max=14888k, avg=152644.82, stdev=914814.98 00:16:04.734 clat (usec): min=6425, max=60002, avg=17342.33, stdev=9787.43 00:16:04.734 lat (usec): min=6434, max=60012, avg=17494.97, stdev=9878.28 00:16:04.734 clat percentiles (usec): 00:16:04.734 | 1.00th=[ 7242], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[10290], 00:16:04.734 | 30.00th=[11207], 40.00th=[12780], 50.00th=[14484], 60.00th=[16909], 00:16:04.734 | 70.00th=[18744], 80.00th=[20579], 90.00th=[29492], 95.00th=[40109], 00:16:04.734 | 99.00th=[52691], 99.50th=[54264], 99.90th=[60031], 99.95th=[60031], 00:16:04.734 | 99.99th=[60031] 00:16:04.734 write: IOPS=3591, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1007msec); 0 zone resets 00:16:04.734 slat (usec): min=2, max=7980, avg=115.00, stdev=538.00 00:16:04.734 clat (usec): min=1328, max=59937, avg=18004.48, stdev=11259.34 00:16:04.734 lat (usec): min=1358, max=59942, avg=18119.48, stdev=11318.78 00:16:04.734 clat percentiles (usec): 00:16:04.734 | 1.00th=[ 3949], 5.00th=[ 5866], 10.00th=[ 6783], 20.00th=[ 8356], 00:16:04.734 | 30.00th=[11207], 40.00th=[12387], 50.00th=[13698], 60.00th=[15926], 00:16:04.734 | 70.00th=[21103], 80.00th=[26870], 90.00th=[37487], 95.00th=[41681], 00:16:04.734 | 99.00th=[47449], 99.50th=[49546], 99.90th=[57410], 99.95th=[57410], 00:16:04.734 | 99.99th=[60031] 00:16:04.734 bw ( KiB/s): min=10944, max=17728, per=33.84%, avg=14336.00, stdev=4797.01, samples=2 00:16:04.734 iops : min= 2736, max= 4432, avg=3584.00, stdev=1199.25, samples=2 00:16:04.734 lat (msec) : 2=0.04%, 4=0.51%, 10=20.21%, 20=52.02%, 50=25.93% 00:16:04.734 lat (msec) : 100=1.29% 00:16:04.734 cpu : usr=2.68%, sys=3.48%, ctx=551, majf=0, minf=1 00:16:04.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:04.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.734 issued rwts: total=3584,3617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.734 00:16:04.734 Run status group 0 (all jobs): 00:16:04.734 READ: bw=37.8MiB/s (39.6MB/s), 2010KiB/s-13.9MiB/s (2058kB/s-14.6MB/s), io=38.5MiB (40.4MB), run=1003-1019msec 00:16:04.734 WRITE: bw=41.4MiB/s (43.4MB/s), 3454KiB/s-14.0MiB/s (3537kB/s-14.7MB/s), io=42.2MiB (44.2MB), run=1003-1019msec 00:16:04.734 00:16:04.734 Disk stats (read/write): 00:16:04.734 nvme0n1: ios=562/799, merge=0/0, ticks=13684/38609, in_queue=52293, util=88.98% 00:16:04.734 nvme0n2: ios=2600/2633, merge=0/0, ticks=11648/14902, in_queue=26550, util=97.87% 00:16:04.734 nvme0n3: ios=2355/2560, merge=0/0, ticks=31009/57892, in_queue=88901, util=93.87% 00:16:04.734 nvme0n4: ios=3095/3271, merge=0/0, ticks=51532/53768, in_queue=105300, util=98.74% 00:16:04.734 14:43:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:04.734 14:43:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2315149 00:16:04.734 14:43:24 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:04.734 14:43:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:04.734 [global] 00:16:04.734 thread=1 00:16:04.734 invalidate=1 00:16:04.734 rw=read 00:16:04.734 time_based=1 00:16:04.734 runtime=10 00:16:04.734 ioengine=libaio 00:16:04.734 direct=1 00:16:04.734 bs=4096 00:16:04.734 iodepth=1 00:16:04.734 norandommap=1 00:16:04.734 numjobs=1 00:16:04.734 00:16:04.734 [job0] 00:16:04.734 filename=/dev/nvme0n1 00:16:04.734 [job1] 00:16:04.734 filename=/dev/nvme0n2 00:16:04.734 [job2] 00:16:04.734 filename=/dev/nvme0n3 00:16:04.734 [job3] 00:16:04.734 filename=/dev/nvme0n4 00:16:04.734 Could not set queue depth (nvme0n1) 00:16:04.734 Could not set queue depth (nvme0n2) 00:16:04.734 Could not set queue depth (nvme0n3) 00:16:04.734 Could not set queue depth (nvme0n4) 00:16:04.993 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.993 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.993 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.993 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.993 fio-3.35 00:16:04.993 Starting 4 threads 00:16:07.526 14:43:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:07.784 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=4947968, buflen=4096 00:16:07.784 fio: pid=2315357, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:07.784 14:43:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:08.043 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=9142272, buflen=4096 00:16:08.043 fio: pid=2315346, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:08.043 14:43:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:08.043 14:43:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:08.043 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=21233664, buflen=4096 00:16:08.043 fio: pid=2315304, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:08.044 14:43:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:08.044 14:43:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:08.303 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=18903040, buflen=4096 00:16:08.303 fio: pid=2315314, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:08.303 14:43:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:08.303 14:43:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:08.303 00:16:08.303 job0: 
(groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2315304: Thu Jul 25 14:43:28 2024 00:16:08.303 read: IOPS=1717, BW=6868KiB/s (7033kB/s)(20.2MiB/3019msec) 00:16:08.303 slat (usec): min=6, max=17909, avg=16.41, stdev=342.13 00:16:08.303 clat (usec): min=360, max=42110, avg=564.48, stdev=1043.99 00:16:08.303 lat (usec): min=369, max=42132, avg=580.89, stdev=1100.94 00:16:08.303 clat percentiles (usec): 00:16:08.303 | 1.00th=[ 404], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 482], 00:16:08.303 | 30.00th=[ 498], 40.00th=[ 506], 50.00th=[ 515], 60.00th=[ 519], 00:16:08.303 | 70.00th=[ 529], 80.00th=[ 553], 90.00th=[ 644], 95.00th=[ 783], 00:16:08.303 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1631], 99.95th=[40633], 00:16:08.303 | 99.99th=[42206] 00:16:08.303 bw ( KiB/s): min= 5144, max= 7680, per=43.01%, avg=7064.00, stdev=1085.72, samples=5 00:16:08.303 iops : min= 1286, max= 1920, avg=1766.00, stdev=271.43, samples=5 00:16:08.303 lat (usec) : 500=31.53%, 750=62.39%, 1000=4.03% 00:16:08.303 lat (msec) : 2=1.93%, 4=0.02%, 50=0.08% 00:16:08.303 cpu : usr=0.27%, sys=1.82%, ctx=5190, majf=0, minf=1 00:16:08.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.303 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.303 issued rwts: total=5185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.303 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2315314: Thu Jul 25 14:43:28 2024 00:16:08.303 read: IOPS=1431, BW=5726KiB/s (5863kB/s)(18.0MiB/3224msec) 00:16:08.303 slat (usec): min=3, max=12959, avg=15.53, stdev=300.45 00:16:08.303 clat (usec): min=362, max=42081, avg=681.42, stdev=2504.27 00:16:08.303 lat (usec): min=369, max=54141, avg=696.94, stdev=2567.71 00:16:08.303 clat percentiles (usec): 00:16:08.303 | 1.00th=[ 404], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 478], 00:16:08.303 | 30.00th=[ 490], 40.00th=[ 498], 50.00th=[ 506], 60.00th=[ 510], 00:16:08.303 | 70.00th=[ 519], 80.00th=[ 545], 90.00th=[ 619], 95.00th=[ 766], 00:16:08.303 | 99.00th=[ 1205], 99.50th=[ 1385], 99.90th=[42206], 99.95th=[42206], 00:16:08.303 | 99.99th=[42206] 00:16:08.303 bw ( KiB/s): min= 1472, max= 7800, per=37.27%, avg=6121.33, stdev=2373.71, samples=6 00:16:08.303 iops : min= 368, max= 1950, avg=1530.33, stdev=593.43, samples=6 00:16:08.303 lat (usec) : 500=43.35%, 750=51.23%, 1000=3.01% 00:16:08.303 lat (msec) : 2=1.95%, 4=0.04%, 10=0.02%, 50=0.37% 00:16:08.303 cpu : usr=0.50%, sys=1.33%, ctx=4623, majf=0, minf=1 00:16:08.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.303 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.303 issued rwts: total=4616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.303 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2315346: Thu Jul 25 14:43:28 2024 00:16:08.303 read: IOPS=785, BW=3141KiB/s (3217kB/s)(8928KiB/2842msec) 00:16:08.303 slat (nsec): min=6223, max=59045, avg=7552.24, stdev=1932.83 00:16:08.303 clat (usec): min=396, max=43078, avg=1264.23, stdev=5442.24 00:16:08.303 lat (usec): min=404, max=43101, 
avg=1271.78, stdev=5442.90 00:16:08.303 clat percentiles (usec): 00:16:08.303 | 1.00th=[ 408], 5.00th=[ 420], 10.00th=[ 429], 20.00th=[ 453], 00:16:08.303 | 30.00th=[ 502], 40.00th=[ 515], 50.00th=[ 523], 60.00th=[ 537], 00:16:08.303 | 70.00th=[ 545], 80.00th=[ 570], 90.00th=[ 685], 95.00th=[ 840], 00:16:08.303 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[43254], 00:16:08.303 | 99.99th=[43254] 00:16:08.303 bw ( KiB/s): min= 96, max= 7664, per=21.65%, avg=3556.80, stdev=3750.51, samples=5 00:16:08.304 iops : min= 24, max= 1916, avg=889.20, stdev=937.63, samples=5 00:16:08.304 lat (usec) : 500=28.97%, 750=63.05%, 1000=4.97% 00:16:08.304 lat (msec) : 2=1.16%, 4=0.04%, 50=1.75% 00:16:08.304 cpu : usr=0.14%, sys=0.84%, ctx=2234, majf=0, minf=1 00:16:08.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.304 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.304 issued rwts: total=2233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.304 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2315357: Thu Jul 25 14:43:28 2024 00:16:08.304 read: IOPS=459, BW=1837KiB/s (1881kB/s)(4832KiB/2631msec) 00:16:08.304 slat (nsec): min=6416, max=33292, avg=7982.69, stdev=3116.72 00:16:08.304 clat (usec): min=396, max=42981, avg=2167.83, stdev=8073.02 00:16:08.304 lat (usec): min=404, max=43003, avg=2175.81, stdev=8075.75 00:16:08.304 clat percentiles (usec): 00:16:08.304 | 1.00th=[ 404], 5.00th=[ 416], 10.00th=[ 420], 20.00th=[ 429], 00:16:08.304 | 30.00th=[ 437], 40.00th=[ 465], 50.00th=[ 515], 60.00th=[ 529], 00:16:08.304 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 603], 95.00th=[ 1188], 00:16:08.304 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:16:08.304 | 99.99th=[42730] 00:16:08.304 bw ( KiB/s): min= 96, max= 6536, per=11.73%, avg=1926.40, stdev=2740.81, samples=5 00:16:08.304 iops : min= 24, max= 1634, avg=481.60, stdev=685.20, samples=5 00:16:08.304 lat (usec) : 500=44.50%, 750=47.73%, 1000=1.82% 00:16:08.304 lat (msec) : 2=1.74%, 10=0.08%, 20=0.08%, 50=3.97% 00:16:08.304 cpu : usr=0.08%, sys=0.53%, ctx=1209, majf=0, minf=2 00:16:08.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.304 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.304 issued rwts: total=1209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.304 00:16:08.304 Run status group 0 (all jobs): 00:16:08.304 READ: bw=16.0MiB/s (16.8MB/s), 1837KiB/s-6868KiB/s (1881kB/s-7033kB/s), io=51.7MiB (54.2MB), run=2631-3224msec 00:16:08.304 00:16:08.304 Disk stats (read/write): 00:16:08.304 nvme0n1: ios=4946/0, merge=0/0, ticks=2740/0, in_queue=2740, util=93.69% 00:16:08.304 nvme0n2: ios=4644/0, merge=0/0, ticks=3191/0, in_queue=3191, util=98.54% 00:16:08.304 nvme0n3: ios=2231/0, merge=0/0, ticks=2768/0, in_queue=2768, util=96.10% 00:16:08.304 nvme0n4: ios=1205/0, merge=0/0, ticks=2502/0, in_queue=2502, util=96.35% 00:16:08.563 14:43:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:08.563 14:43:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:08.822 14:43:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:08.822 14:43:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:08.822 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:08.822 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:09.081 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:09.081 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:09.340 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:09.340 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2315149 00:16:09.340 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:09.340 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:09.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.340 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:09.340 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:16:09.340 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:09.340 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:09.340 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:09.340 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:09.340 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:16:09.340 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:09.340 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:09.340 nvmf hotplug test: fio failed as expected 00:16:09.340 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:09.598 14:43:29 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:09.598 rmmod nvme_tcp 00:16:09.598 rmmod nvme_fabrics 00:16:09.598 rmmod nvme_keyring 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2312353 ']' 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2312353 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2312353 ']' 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2312353 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2312353 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2312353' 00:16:09.598 killing process with pid 2312353 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2312353 00:16:09.598 14:43:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2312353 00:16:09.857 14:43:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:09.857 14:43:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:09.857 14:43:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:09.857 14:43:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:09.857 14:43:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:09.857 14:43:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.857 14:43:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.857 14:43:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.454 14:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:12.454 00:16:12.454 real 0m26.216s 00:16:12.454 user 1m46.138s 00:16:12.454 sys 0m7.448s 00:16:12.454 14:43:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:12.454 14:43:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.454 ************************************ 00:16:12.454 END TEST nvmf_fio_target 00:16:12.454 ************************************ 00:16:12.454 14:43:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:12.454 14:43:32 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:12.454 14:43:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:12.454 14:43:32 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.454 14:43:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:12.454 ************************************ 00:16:12.454 START TEST nvmf_bdevio 00:16:12.454 ************************************ 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:12.454 * Looking for test storage... 00:16:12.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.454 14:43:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:12.455 14:43:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:17.729 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:17.730 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:17.730 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:17.730 Found net devices under 0000:86:00.0: cvl_0_0 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:17.730 
Found net devices under 0000:86:00.1: cvl_0_1 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:17.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:16:17.730 00:16:17.730 --- 10.0.0.2 ping statistics --- 00:16:17.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.730 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:17.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:17.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:16:17.730 00:16:17.730 --- 10.0.0.1 ping statistics --- 00:16:17.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.730 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2319632 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2319632 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2319632 ']' 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.730 14:43:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.730 [2024-07-25 14:43:37.861115] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:16:17.730 [2024-07-25 14:43:37.861164] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.730 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.730 [2024-07-25 14:43:37.922910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.730 [2024-07-25 14:43:38.000883] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.730 [2024-07-25 14:43:38.000923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:17.730 [2024-07-25 14:43:38.000930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.730 [2024-07-25 14:43:38.000937] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.730 [2024-07-25 14:43:38.000942] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.730 [2024-07-25 14:43:38.001070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:17.730 [2024-07-25 14:43:38.001197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:17.730 [2024-07-25 14:43:38.001303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.730 [2024-07-25 14:43:38.001305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:18.664 [2024-07-25 14:43:38.715070] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:18.664 Malloc0 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
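The xtrace above shows target/bdevio.sh provisioning the target over the RPC socket: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev attached as a namespace, and a TCP listener on 10.0.0.2:4420. Reproduced by hand against a running nvmf_tgt, the same sequence would look roughly like the sketch below; it is only a condensation of the rpc_cmd calls traced in this log, assuming rpc.py reaches the default /var/tmp/spdk.sock.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                   # TCP transport, options as in the log
  $rpc bdev_malloc_create 64 512 -b Malloc0                                      # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 # allow any host, set serial
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                  # expose Malloc0 as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420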
00:16:18.664 [2024-07-25 14:43:38.766816] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:18.664 { 00:16:18.664 "params": { 00:16:18.664 "name": "Nvme$subsystem", 00:16:18.664 "trtype": "$TEST_TRANSPORT", 00:16:18.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.664 "adrfam": "ipv4", 00:16:18.664 "trsvcid": "$NVMF_PORT", 00:16:18.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.664 "hdgst": ${hdgst:-false}, 00:16:18.664 "ddgst": ${ddgst:-false} 00:16:18.664 }, 00:16:18.664 "method": "bdev_nvme_attach_controller" 00:16:18.664 } 00:16:18.664 EOF 00:16:18.664 )") 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:18.664 14:43:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:18.664 "params": { 00:16:18.664 "name": "Nvme1", 00:16:18.664 "trtype": "tcp", 00:16:18.664 "traddr": "10.0.0.2", 00:16:18.664 "adrfam": "ipv4", 00:16:18.664 "trsvcid": "4420", 00:16:18.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.664 "hdgst": false, 00:16:18.664 "ddgst": false 00:16:18.664 }, 00:16:18.664 "method": "bdev_nvme_attach_controller" 00:16:18.664 }' 00:16:18.664 [2024-07-25 14:43:38.813427] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
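The bdev_nvme_attach_controller fragment printed just above is what gen_nvmf_target_json feeds to the bdevio app on /dev/fd/62, so the initiator side of this test is configured entirely from JSON rather than over RPC. As a hedged illustration only, the same attachment could be made against an SPDK application that has its RPC server running with a call along these lines; the flag names are rpc.py's, and the addresses and NQNs are the ones resolved in the log (nqn.2016-06.io.spdk:host1 being the host NQN gen_nvmf_target_json chose here):

  # hypothetical one-off equivalent of the JSON handed to bdevio on /dev/fd/62
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1   # yields bdev Nvme1n1 once the namespace attaches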
00:16:18.664 [2024-07-25 14:43:38.813472] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319776 ] 00:16:18.664 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.664 [2024-07-25 14:43:38.869654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:18.664 [2024-07-25 14:43:38.945552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.664 [2024-07-25 14:43:38.945649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.664 [2024-07-25 14:43:38.945649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.922 I/O targets: 00:16:18.922 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:18.922 00:16:18.922 00:16:18.922 CUnit - A unit testing framework for C - Version 2.1-3 00:16:18.922 http://cunit.sourceforge.net/ 00:16:18.922 00:16:18.922 00:16:18.922 Suite: bdevio tests on: Nvme1n1 00:16:18.922 Test: blockdev write read block ...passed 00:16:19.180 Test: blockdev write zeroes read block ...passed 00:16:19.180 Test: blockdev write zeroes read no split ...passed 00:16:19.180 Test: blockdev write zeroes read split ...passed 00:16:19.180 Test: blockdev write zeroes read split partial ...passed 00:16:19.180 Test: blockdev reset ...[2024-07-25 14:43:39.340846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:19.180 [2024-07-25 14:43:39.340910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc96d0 (9): Bad file descriptor 00:16:19.438 [2024-07-25 14:43:39.485606] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:19.438 passed 00:16:19.438 Test: blockdev write read 8 blocks ...passed 00:16:19.438 Test: blockdev write read size > 128k ...passed 00:16:19.438 Test: blockdev write read invalid size ...passed 00:16:19.438 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:19.438 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:19.438 Test: blockdev write read max offset ...passed 00:16:19.438 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:19.438 Test: blockdev writev readv 8 blocks ...passed 00:16:19.438 Test: blockdev writev readv 30 x 1block ...passed 00:16:19.438 Test: blockdev writev readv block ...passed 00:16:19.438 Test: blockdev writev readv size > 128k ...passed 00:16:19.438 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:19.438 Test: blockdev comparev and writev ...[2024-07-25 14:43:39.724330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.438 [2024-07-25 14:43:39.724358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.438 [2024-07-25 14:43:39.724376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.438 [2024-07-25 14:43:39.724383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.438 [2024-07-25 14:43:39.724900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.438 [2024-07-25 14:43:39.724911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:19.438 [2024-07-25 14:43:39.724923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.438 [2024-07-25 14:43:39.724930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:19.438 [2024-07-25 14:43:39.725410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.438 [2024-07-25 14:43:39.725420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:19.438 [2024-07-25 14:43:39.725432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.438 [2024-07-25 14:43:39.725439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:19.438 [2024-07-25 14:43:39.725903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.438 [2024-07-25 14:43:39.725913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:19.438 [2024-07-25 14:43:39.725925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:19.438 [2024-07-25 14:43:39.725932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:19.696 passed 00:16:19.696 Test: blockdev nvme passthru rw ...passed 00:16:19.696 Test: blockdev nvme passthru vendor specific ...[2024-07-25 14:43:39.809946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.696 [2024-07-25 14:43:39.809960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:19.696 [2024-07-25 14:43:39.810350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.696 [2024-07-25 14:43:39.810361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:19.696 [2024-07-25 14:43:39.810752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.696 [2024-07-25 14:43:39.810762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:19.696 [2024-07-25 14:43:39.811159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:19.696 [2024-07-25 14:43:39.811169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:19.696 passed 00:16:19.696 Test: blockdev nvme admin passthru ...passed 00:16:19.696 Test: blockdev copy ...passed 00:16:19.696 00:16:19.696 Run Summary: Type Total Ran Passed Failed Inactive 00:16:19.696 suites 1 1 n/a 0 0 00:16:19.696 tests 23 23 23 0 0 00:16:19.696 asserts 152 152 152 0 n/a 00:16:19.696 00:16:19.696 Elapsed time = 1.433 seconds 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:19.954 rmmod nvme_tcp 00:16:19.954 rmmod nvme_fabrics 00:16:19.954 rmmod nvme_keyring 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2319632 ']' 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2319632 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
2319632 ']' 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2319632 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2319632 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2319632' 00:16:19.954 killing process with pid 2319632 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2319632 00:16:19.954 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2319632 00:16:20.213 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:20.213 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:20.214 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:20.214 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.214 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:20.214 14:43:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.214 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.214 14:43:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.123 14:43:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:22.382 00:16:22.382 real 0m10.218s 00:16:22.382 user 0m13.347s 00:16:22.382 sys 0m4.638s 00:16:22.382 14:43:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:22.382 14:43:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:22.382 ************************************ 00:16:22.382 END TEST nvmf_bdevio 00:16:22.382 ************************************ 00:16:22.382 14:43:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:22.382 14:43:42 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:22.382 14:43:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:22.382 14:43:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:22.382 14:43:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:22.382 ************************************ 00:16:22.382 START TEST nvmf_auth_target 00:16:22.382 ************************************ 00:16:22.382 14:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:22.382 * Looking for test storage... 
00:16:22.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:22.382 14:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:22.382 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:22.383 14:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:27.654 14:43:47 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:27.654 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:27.654 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:16:27.654 Found net devices under 0000:86:00.0: cvl_0_0 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.654 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:27.655 Found net devices under 0000:86:00.1: cvl_0_1 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:27.655 14:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:27.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:16:27.914 00:16:27.914 --- 10.0.0.2 ping statistics --- 00:16:27.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.914 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:27.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:27.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.599 ms 00:16:27.914 00:16:27.914 --- 10.0.0.1 ping statistics --- 00:16:27.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.914 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:27.914 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:28.173 14:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:28.173 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:28.173 14:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:28.173 14:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.173 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2323517 00:16:28.173 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2323517 00:16:28.173 14:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:28.173 14:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2323517 ']' 00:16:28.173 14:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.173 14:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.173 14:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
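Before either target test can run, nvmf_tcp_init splits the two E810 ports across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24 for the target, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1/24, an iptables rule admits TCP port 4420, and the two pings verify reachability before nvmf_tgt is launched inside the namespace with ip netns exec. Condensed from the commands traced above (interface names are specific to this machine), the setup is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # move the target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # sanity-check both directions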
00:16:28.173 14:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.173 14:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2323671 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3162e0203a4a054caecbef797aa73bcb51fbfa63e44491f2 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.viT 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3162e0203a4a054caecbef797aa73bcb51fbfa63e44491f2 0 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3162e0203a4a054caecbef797aa73bcb51fbfa63e44491f2 0 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3162e0203a4a054caecbef797aa73bcb51fbfa63e44491f2 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.viT 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.viT 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.viT 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=86533f205bcbb7d048ceb759be2a7a04e5cd1e600e83c09d26f5e932b3762a01 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.uBD 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 86533f205bcbb7d048ceb759be2a7a04e5cd1e600e83c09d26f5e932b3762a01 3 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 86533f205bcbb7d048ceb759be2a7a04e5cd1e600e83c09d26f5e932b3762a01 3 00:16:29.107 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=86533f205bcbb7d048ceb759be2a7a04e5cd1e600e83c09d26f5e932b3762a01 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.uBD 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.uBD 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.uBD 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ecd6ba5bea0772bf1dc318a7ed091575 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GJu 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ecd6ba5bea0772bf1dc318a7ed091575 1 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ecd6ba5bea0772bf1dc318a7ed091575 1 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=ecd6ba5bea0772bf1dc318a7ed091575 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GJu 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GJu 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.GJu 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b442803b832bec34693938bc8221ffacccc6eb8074d0000f 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0Yv 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b442803b832bec34693938bc8221ffacccc6eb8074d0000f 2 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b442803b832bec34693938bc8221ffacccc6eb8074d0000f 2 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b442803b832bec34693938bc8221ffacccc6eb8074d0000f 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0Yv 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0Yv 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.0Yv 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=917e2e57e56623b51aca628cd9fb1ea4233ee97ab4ee1d9a 00:16:29.108 
14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.RFU 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 917e2e57e56623b51aca628cd9fb1ea4233ee97ab4ee1d9a 2 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 917e2e57e56623b51aca628cd9fb1ea4233ee97ab4ee1d9a 2 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=917e2e57e56623b51aca628cd9fb1ea4233ee97ab4ee1d9a 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:29.108 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.RFU 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.RFU 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.RFU 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f10ec0edef238f4670f0a6a5f631cefd 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0wv 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f10ec0edef238f4670f0a6a5f631cefd 1 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f10ec0edef238f4670f0a6a5f631cefd 1 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f10ec0edef238f4670f0a6a5f631cefd 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0wv 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0wv 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.0wv 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fb2cfc239b5c6354e37bc97fc211c0d69d00e015f0447a2b36db297c22ac9326 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Wjf 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fb2cfc239b5c6354e37bc97fc211c0d69d00e015f0447a2b36db297c22ac9326 3 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fb2cfc239b5c6354e37bc97fc211c0d69d00e015f0447a2b36db297c22ac9326 3 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fb2cfc239b5c6354e37bc97fc211c0d69d00e015f0447a2b36db297c22ac9326 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Wjf 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Wjf 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Wjf 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2323517 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2323517 ']' 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
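The four keys[] and three ckeys[] files generated above all come from the same gen_dhchap_key helper: a random hex string of the requested length is read from /dev/urandom with xxd, wrapped by an embedded python step into the DHHC-1:<digest-id>:<base64>: secret form (digest ids 0/1/2/3 for null/sha256/sha384/sha512), written to a mktemp file under /tmp, and chmod'ed to 0600. A stand-alone sketch of that flow follows; the detail that the python step appends a little-endian CRC-32 of the secret before base64-encoding is an assumption based on the DH-HMAC-CHAP secret representation, not something visible in the trace itself.

# sketch of "gen_dhchap_key null 48", following the steps traced above
len=48; digest_id=0                                    # digest ids: null=0 sha256=1 sha384=2 sha512=3
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)         # e.g. 3162e0203a4a054caecbef797aa73bcb51fbfa63e44491f2
file=$(mktemp -t spdk.key-null.XXX)                    # e.g. /tmp/spdk.key-null.viT
# wrap the hex string into the DHHC-1 secret form (CRC-32 suffix is an assumption, see above)
python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:0"+sys.argv[2]+":"+base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode()+":")' "$key" "$digest_id" > "$file"
chmod 0600 "$file"
echo "$file"

The resulting DHHC-1 strings are what the later nvme connect calls pass as --dhchap-secret and --dhchap-ctrl-secret, which is why the base64 payloads in those commands decode back to the hex keys printed in this part of the trace.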
00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.367 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2323671 /var/tmp/host.sock 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2323671 ']' 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:29.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.viT 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.viT 00:16:29.626 14:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.viT 00:16:29.885 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.uBD ]] 00:16:29.885 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uBD 00:16:29.885 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.885 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.885 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.885 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uBD 00:16:29.885 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uBD 00:16:30.144 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:30.144 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.GJu 00:16:30.144 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.144 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.144 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.144 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.GJu 00:16:30.144 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.GJu 00:16:30.403 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.0Yv ]] 00:16:30.403 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0Yv 00:16:30.403 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.403 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.403 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.403 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0Yv 00:16:30.403 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0Yv 00:16:30.403 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:30.403 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.RFU 00:16:30.403 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.403 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.403 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.403 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.RFU 00:16:30.403 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.RFU 00:16:30.661 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.0wv ]] 00:16:30.661 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0wv 00:16:30.661 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.661 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.661 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.661 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0wv 00:16:30.661 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.0wv 00:16:30.919 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:30.919 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Wjf 00:16:30.919 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.919 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.919 14:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.919 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Wjf 00:16:30.919 14:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Wjf 00:16:30.919 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:30.919 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:30.919 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.919 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.919 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:30.919 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:31.177 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:31.177 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.177 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.177 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:31.177 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:31.177 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.177 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.177 14:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.177 14:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.177 14:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.177 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.177 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.435 00:16:31.435 14:43:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.435 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.435 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.693 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.693 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.694 14:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.694 14:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.694 14:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.694 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.694 { 00:16:31.694 "cntlid": 1, 00:16:31.694 "qid": 0, 00:16:31.694 "state": "enabled", 00:16:31.694 "thread": "nvmf_tgt_poll_group_000", 00:16:31.694 "listen_address": { 00:16:31.694 "trtype": "TCP", 00:16:31.694 "adrfam": "IPv4", 00:16:31.694 "traddr": "10.0.0.2", 00:16:31.694 "trsvcid": "4420" 00:16:31.694 }, 00:16:31.694 "peer_address": { 00:16:31.694 "trtype": "TCP", 00:16:31.694 "adrfam": "IPv4", 00:16:31.694 "traddr": "10.0.0.1", 00:16:31.694 "trsvcid": "46620" 00:16:31.694 }, 00:16:31.694 "auth": { 00:16:31.694 "state": "completed", 00:16:31.694 "digest": "sha256", 00:16:31.694 "dhgroup": "null" 00:16:31.694 } 00:16:31.694 } 00:16:31.694 ]' 00:16:31.694 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.694 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.694 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.694 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:31.694 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.694 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.694 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.694 14:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.951 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.551 14:43:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.551 14:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.810 00:16:32.810 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.810 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.810 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.069 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.069 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.069 14:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.069 14:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.069 14:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.069 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.069 { 00:16:33.069 "cntlid": 3, 00:16:33.069 "qid": 0, 00:16:33.069 
"state": "enabled", 00:16:33.069 "thread": "nvmf_tgt_poll_group_000", 00:16:33.069 "listen_address": { 00:16:33.069 "trtype": "TCP", 00:16:33.069 "adrfam": "IPv4", 00:16:33.069 "traddr": "10.0.0.2", 00:16:33.069 "trsvcid": "4420" 00:16:33.069 }, 00:16:33.069 "peer_address": { 00:16:33.069 "trtype": "TCP", 00:16:33.069 "adrfam": "IPv4", 00:16:33.069 "traddr": "10.0.0.1", 00:16:33.069 "trsvcid": "46632" 00:16:33.069 }, 00:16:33.069 "auth": { 00:16:33.069 "state": "completed", 00:16:33.069 "digest": "sha256", 00:16:33.069 "dhgroup": "null" 00:16:33.069 } 00:16:33.069 } 00:16:33.069 ]' 00:16:33.069 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.069 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.069 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.069 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:33.069 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.326 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.326 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.326 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.326 14:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:16:33.891 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.891 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.891 14:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.891 14:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.891 14:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.891 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.891 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:33.891 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:34.149 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:34.149 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.149 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.149 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:34.149 14:43:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:34.149 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.149 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.149 14:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.149 14:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.149 14:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.149 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.149 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.407 00:16:34.407 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.407 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.408 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.408 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.408 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.408 14:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.408 14:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.408 14:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.408 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.408 { 00:16:34.408 "cntlid": 5, 00:16:34.408 "qid": 0, 00:16:34.408 "state": "enabled", 00:16:34.408 "thread": "nvmf_tgt_poll_group_000", 00:16:34.408 "listen_address": { 00:16:34.408 "trtype": "TCP", 00:16:34.408 "adrfam": "IPv4", 00:16:34.408 "traddr": "10.0.0.2", 00:16:34.408 "trsvcid": "4420" 00:16:34.408 }, 00:16:34.408 "peer_address": { 00:16:34.408 "trtype": "TCP", 00:16:34.408 "adrfam": "IPv4", 00:16:34.408 "traddr": "10.0.0.1", 00:16:34.408 "trsvcid": "46646" 00:16:34.408 }, 00:16:34.408 "auth": { 00:16:34.408 "state": "completed", 00:16:34.408 "digest": "sha256", 00:16:34.408 "dhgroup": "null" 00:16:34.408 } 00:16:34.408 } 00:16:34.408 ]' 00:16:34.408 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.666 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.666 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.666 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:34.666 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:16:34.666 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.666 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.666 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.925 14:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.491 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.750 00:16:35.750 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.750 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.750 14:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.008 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.008 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.008 14:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.008 14:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.008 14:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.008 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.008 { 00:16:36.008 "cntlid": 7, 00:16:36.008 "qid": 0, 00:16:36.008 "state": "enabled", 00:16:36.008 "thread": "nvmf_tgt_poll_group_000", 00:16:36.008 "listen_address": { 00:16:36.008 "trtype": "TCP", 00:16:36.008 "adrfam": "IPv4", 00:16:36.008 "traddr": "10.0.0.2", 00:16:36.008 "trsvcid": "4420" 00:16:36.008 }, 00:16:36.008 "peer_address": { 00:16:36.008 "trtype": "TCP", 00:16:36.008 "adrfam": "IPv4", 00:16:36.008 "traddr": "10.0.0.1", 00:16:36.008 "trsvcid": "46674" 00:16:36.008 }, 00:16:36.008 "auth": { 00:16:36.008 "state": "completed", 00:16:36.008 "digest": "sha256", 00:16:36.008 "dhgroup": "null" 00:16:36.008 } 00:16:36.008 } 00:16:36.008 ]' 00:16:36.008 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.008 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.008 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.008 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:36.008 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.008 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.008 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.008 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.265 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:16:36.832 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.832 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.832 14:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.832 14:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.832 14:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.832 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.832 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.832 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:36.832 14:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.090 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.090 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.348 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.348 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.348 14:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:16:37.348 14:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.348 14:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.348 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.348 { 00:16:37.348 "cntlid": 9, 00:16:37.348 "qid": 0, 00:16:37.348 "state": "enabled", 00:16:37.348 "thread": "nvmf_tgt_poll_group_000", 00:16:37.348 "listen_address": { 00:16:37.348 "trtype": "TCP", 00:16:37.348 "adrfam": "IPv4", 00:16:37.348 "traddr": "10.0.0.2", 00:16:37.348 "trsvcid": "4420" 00:16:37.348 }, 00:16:37.348 "peer_address": { 00:16:37.348 "trtype": "TCP", 00:16:37.348 "adrfam": "IPv4", 00:16:37.348 "traddr": "10.0.0.1", 00:16:37.348 "trsvcid": "52594" 00:16:37.348 }, 00:16:37.348 "auth": { 00:16:37.348 "state": "completed", 00:16:37.348 "digest": "sha256", 00:16:37.348 "dhgroup": "ffdhe2048" 00:16:37.348 } 00:16:37.348 } 00:16:37.348 ]' 00:16:37.348 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.348 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.348 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.605 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.605 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.605 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.605 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.605 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.605 14:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:16:38.170 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.170 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.170 14:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.170 14:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.170 14:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.170 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.170 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.170 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:38.427 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:38.427 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.427 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.427 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:38.427 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:38.427 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.427 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.427 14:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.427 14:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.427 14:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.427 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.427 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.685 00:16:38.685 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.685 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.685 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.943 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.943 14:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.943 14:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.943 14:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.943 14:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.943 14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.943 { 00:16:38.943 "cntlid": 11, 00:16:38.943 "qid": 0, 00:16:38.943 "state": "enabled", 00:16:38.943 "thread": "nvmf_tgt_poll_group_000", 00:16:38.943 "listen_address": { 00:16:38.943 "trtype": "TCP", 00:16:38.943 "adrfam": "IPv4", 00:16:38.943 "traddr": "10.0.0.2", 00:16:38.943 "trsvcid": "4420" 00:16:38.943 }, 00:16:38.943 "peer_address": { 00:16:38.943 "trtype": "TCP", 00:16:38.943 "adrfam": "IPv4", 00:16:38.943 "traddr": "10.0.0.1", 00:16:38.943 "trsvcid": "52622" 00:16:38.943 }, 00:16:38.943 "auth": { 00:16:38.943 "state": "completed", 00:16:38.943 "digest": "sha256", 00:16:38.943 "dhgroup": "ffdhe2048" 00:16:38.943 } 00:16:38.943 } 00:16:38.943 ]' 00:16:38.943 
14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.943 14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.943 14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.943 14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.943 14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.943 14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.943 14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.943 14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.200 14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:16:39.766 14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.766 14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.766 14:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.766 14:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.766 14:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.766 14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.766 14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:39.766 14:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:39.766 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:39.766 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.766 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:39.766 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:39.766 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:39.766 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.766 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.766 14:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.766 14:44:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.766 14:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.766 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.766 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.024 00:16:40.024 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.024 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.024 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.281 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.281 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.281 14:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.281 14:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.281 14:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.281 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.281 { 00:16:40.281 "cntlid": 13, 00:16:40.281 "qid": 0, 00:16:40.281 "state": "enabled", 00:16:40.281 "thread": "nvmf_tgt_poll_group_000", 00:16:40.281 "listen_address": { 00:16:40.281 "trtype": "TCP", 00:16:40.281 "adrfam": "IPv4", 00:16:40.281 "traddr": "10.0.0.2", 00:16:40.281 "trsvcid": "4420" 00:16:40.281 }, 00:16:40.281 "peer_address": { 00:16:40.281 "trtype": "TCP", 00:16:40.281 "adrfam": "IPv4", 00:16:40.282 "traddr": "10.0.0.1", 00:16:40.282 "trsvcid": "52656" 00:16:40.282 }, 00:16:40.282 "auth": { 00:16:40.282 "state": "completed", 00:16:40.282 "digest": "sha256", 00:16:40.282 "dhgroup": "ffdhe2048" 00:16:40.282 } 00:16:40.282 } 00:16:40.282 ]' 00:16:40.282 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.282 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.282 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.282 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.282 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.540 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.540 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.540 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.540 14:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:16:41.105 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.105 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.105 14:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.105 14:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.105 14:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.105 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.105 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.105 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.363 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:41.364 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.364 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:41.364 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:41.364 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:41.364 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.364 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:41.364 14:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.364 14:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.364 14:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.364 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.364 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.621 00:16:41.621 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.621 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.621 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.879 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.879 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.879 14:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.879 14:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.879 14:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.879 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.879 { 00:16:41.879 "cntlid": 15, 00:16:41.879 "qid": 0, 00:16:41.879 "state": "enabled", 00:16:41.879 "thread": "nvmf_tgt_poll_group_000", 00:16:41.879 "listen_address": { 00:16:41.879 "trtype": "TCP", 00:16:41.879 "adrfam": "IPv4", 00:16:41.879 "traddr": "10.0.0.2", 00:16:41.879 "trsvcid": "4420" 00:16:41.879 }, 00:16:41.879 "peer_address": { 00:16:41.879 "trtype": "TCP", 00:16:41.879 "adrfam": "IPv4", 00:16:41.879 "traddr": "10.0.0.1", 00:16:41.879 "trsvcid": "52686" 00:16:41.879 }, 00:16:41.879 "auth": { 00:16:41.879 "state": "completed", 00:16:41.879 "digest": "sha256", 00:16:41.879 "dhgroup": "ffdhe2048" 00:16:41.879 } 00:16:41.879 } 00:16:41.879 ]' 00:16:41.879 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.879 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.879 14:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.879 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:41.879 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.879 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.879 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.879 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.136 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.702 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.703 14:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.703 14:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.703 14:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.703 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.703 14:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.960 00:16:42.960 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.960 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.960 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.217 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.217 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.217 14:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.217 14:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.217 14:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.217 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.217 { 00:16:43.217 "cntlid": 17, 00:16:43.217 "qid": 0, 00:16:43.217 "state": "enabled", 00:16:43.217 "thread": "nvmf_tgt_poll_group_000", 00:16:43.217 "listen_address": { 00:16:43.217 "trtype": "TCP", 00:16:43.217 "adrfam": "IPv4", 00:16:43.217 "traddr": 
"10.0.0.2", 00:16:43.217 "trsvcid": "4420" 00:16:43.217 }, 00:16:43.217 "peer_address": { 00:16:43.217 "trtype": "TCP", 00:16:43.217 "adrfam": "IPv4", 00:16:43.217 "traddr": "10.0.0.1", 00:16:43.217 "trsvcid": "52712" 00:16:43.217 }, 00:16:43.217 "auth": { 00:16:43.217 "state": "completed", 00:16:43.217 "digest": "sha256", 00:16:43.217 "dhgroup": "ffdhe3072" 00:16:43.217 } 00:16:43.217 } 00:16:43.217 ]' 00:16:43.217 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.217 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.217 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.217 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:43.217 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.474 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.474 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.474 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.474 14:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:16:44.039 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.039 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.039 14:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.039 14:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.039 14:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.039 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.039 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:44.039 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:44.297 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:44.297 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.297 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.297 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:44.297 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:44.297 14:44:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.297 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.297 14:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.297 14:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.297 14:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.297 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.297 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.555 00:16:44.555 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.555 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.555 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.555 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.555 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.555 14:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.555 14:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.555 14:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.555 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.555 { 00:16:44.555 "cntlid": 19, 00:16:44.555 "qid": 0, 00:16:44.555 "state": "enabled", 00:16:44.555 "thread": "nvmf_tgt_poll_group_000", 00:16:44.555 "listen_address": { 00:16:44.555 "trtype": "TCP", 00:16:44.555 "adrfam": "IPv4", 00:16:44.555 "traddr": "10.0.0.2", 00:16:44.555 "trsvcid": "4420" 00:16:44.555 }, 00:16:44.555 "peer_address": { 00:16:44.555 "trtype": "TCP", 00:16:44.555 "adrfam": "IPv4", 00:16:44.555 "traddr": "10.0.0.1", 00:16:44.555 "trsvcid": "52744" 00:16:44.555 }, 00:16:44.555 "auth": { 00:16:44.555 "state": "completed", 00:16:44.555 "digest": "sha256", 00:16:44.555 "dhgroup": "ffdhe3072" 00:16:44.555 } 00:16:44.555 } 00:16:44.555 ]' 00:16:44.555 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.813 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.813 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.813 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.813 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.813 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.813 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.813 14:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.070 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.636 14:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.894 00:16:45.894 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.894 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.894 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.152 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.152 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.152 14:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.152 14:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.152 14:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.152 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.152 { 00:16:46.152 "cntlid": 21, 00:16:46.152 "qid": 0, 00:16:46.152 "state": "enabled", 00:16:46.153 "thread": "nvmf_tgt_poll_group_000", 00:16:46.153 "listen_address": { 00:16:46.153 "trtype": "TCP", 00:16:46.153 "adrfam": "IPv4", 00:16:46.153 "traddr": "10.0.0.2", 00:16:46.153 "trsvcid": "4420" 00:16:46.153 }, 00:16:46.153 "peer_address": { 00:16:46.153 "trtype": "TCP", 00:16:46.153 "adrfam": "IPv4", 00:16:46.153 "traddr": "10.0.0.1", 00:16:46.153 "trsvcid": "52778" 00:16:46.153 }, 00:16:46.153 "auth": { 00:16:46.153 "state": "completed", 00:16:46.153 "digest": "sha256", 00:16:46.153 "dhgroup": "ffdhe3072" 00:16:46.153 } 00:16:46.153 } 00:16:46.153 ]' 00:16:46.153 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.153 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.153 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.153 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.153 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.153 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.153 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.153 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.454 14:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
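(The same connect/authenticate loop repeats above for every digest, dhgroup and key index. A minimal sketch of one iteration, condensed from the commands visible in this log — hostrpc talks to the host app's /var/tmp/host.sock as above, rpc_cmd is the autotest wrapper used here for the target-side RPCs, and the DHHC-1 secrets are placeholders for the keys used in this run:)

  HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

  # Limit the host-side initiator to the digest/dhgroup under test.
  $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

  # Allow the host on the subsystem with the DH-HMAC-CHAP key pair under test.
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Attach a controller from the host side, which forces the authentication handshake.
  $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Check on the target that the qpair authenticated with the expected parameters.
  rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect "completed"

  # Tear down, then repeat the handshake through nvme-cli with the matching DHHC-1 secrets.
  $HOSTRPC bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
      --dhchap-secret "DHHC-1:02:<key2>" --dhchap-ctrl-secret "DHHC-1:01:<ckey2>"
  nvme disconnect -n "$SUBNQN"
  rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
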
00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.023 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.281 00:16:47.281 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.281 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.281 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.539 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.539 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.539 14:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.539 14:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:47.539 14:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.539 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.539 { 00:16:47.539 "cntlid": 23, 00:16:47.539 "qid": 0, 00:16:47.539 "state": "enabled", 00:16:47.539 "thread": "nvmf_tgt_poll_group_000", 00:16:47.539 "listen_address": { 00:16:47.539 "trtype": "TCP", 00:16:47.539 "adrfam": "IPv4", 00:16:47.539 "traddr": "10.0.0.2", 00:16:47.539 "trsvcid": "4420" 00:16:47.539 }, 00:16:47.539 "peer_address": { 00:16:47.539 "trtype": "TCP", 00:16:47.539 "adrfam": "IPv4", 00:16:47.539 "traddr": "10.0.0.1", 00:16:47.539 "trsvcid": "55578" 00:16:47.539 }, 00:16:47.539 "auth": { 00:16:47.539 "state": "completed", 00:16:47.539 "digest": "sha256", 00:16:47.539 "dhgroup": "ffdhe3072" 00:16:47.539 } 00:16:47.539 } 00:16:47.539 ]' 00:16:47.539 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.539 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.539 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.539 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:47.539 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.797 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.797 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.797 14:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.797 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:16:48.364 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.364 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.364 14:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.364 14:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.364 14:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.364 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.364 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.364 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.364 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.622 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:16:48.622 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.622 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:48.622 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:48.622 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:48.622 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.622 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.622 14:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.622 14:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.622 14:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.622 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.622 14:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.881 00:16:48.881 14:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.881 14:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.881 14:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.140 14:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.140 14:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.140 14:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.140 14:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.140 14:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.140 14:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.140 { 00:16:49.140 "cntlid": 25, 00:16:49.140 "qid": 0, 00:16:49.140 "state": "enabled", 00:16:49.140 "thread": "nvmf_tgt_poll_group_000", 00:16:49.140 "listen_address": { 00:16:49.140 "trtype": "TCP", 00:16:49.140 "adrfam": "IPv4", 00:16:49.140 "traddr": "10.0.0.2", 00:16:49.140 "trsvcid": "4420" 00:16:49.140 }, 00:16:49.140 "peer_address": { 00:16:49.140 "trtype": "TCP", 00:16:49.140 "adrfam": "IPv4", 00:16:49.140 "traddr": "10.0.0.1", 00:16:49.140 "trsvcid": "55616" 00:16:49.140 }, 00:16:49.140 "auth": { 00:16:49.140 "state": "completed", 00:16:49.140 "digest": "sha256", 00:16:49.140 "dhgroup": "ffdhe4096" 00:16:49.140 } 00:16:49.140 } 00:16:49.140 ]' 00:16:49.140 14:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.140 14:44:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.140 14:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.140 14:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.140 14:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.140 14:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.140 14:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.140 14:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.398 14:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:16:49.965 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.965 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.965 14:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.965 14:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.965 14:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.965 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.965 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.965 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.224 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:50.224 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.224 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.224 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:50.224 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:50.224 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.224 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.224 14:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.224 14:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.224 14:44:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.224 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.224 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.483 00:16:50.483 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.483 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.483 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.483 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.483 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.483 14:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.483 14:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.483 14:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.483 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.483 { 00:16:50.483 "cntlid": 27, 00:16:50.483 "qid": 0, 00:16:50.483 "state": "enabled", 00:16:50.483 "thread": "nvmf_tgt_poll_group_000", 00:16:50.483 "listen_address": { 00:16:50.483 "trtype": "TCP", 00:16:50.483 "adrfam": "IPv4", 00:16:50.483 "traddr": "10.0.0.2", 00:16:50.483 "trsvcid": "4420" 00:16:50.483 }, 00:16:50.483 "peer_address": { 00:16:50.483 "trtype": "TCP", 00:16:50.483 "adrfam": "IPv4", 00:16:50.483 "traddr": "10.0.0.1", 00:16:50.483 "trsvcid": "55656" 00:16:50.483 }, 00:16:50.483 "auth": { 00:16:50.483 "state": "completed", 00:16:50.483 "digest": "sha256", 00:16:50.483 "dhgroup": "ffdhe4096" 00:16:50.483 } 00:16:50.483 } 00:16:50.483 ]' 00:16:50.483 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.742 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.742 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.742 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.742 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.742 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.742 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.742 14:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.001 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.569 14:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.828 00:16:51.828 14:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.828 14:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.828 14:44:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.087 14:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.087 14:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.087 14:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.087 14:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.087 14:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.087 14:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.087 { 00:16:52.087 "cntlid": 29, 00:16:52.087 "qid": 0, 00:16:52.087 "state": "enabled", 00:16:52.087 "thread": "nvmf_tgt_poll_group_000", 00:16:52.087 "listen_address": { 00:16:52.087 "trtype": "TCP", 00:16:52.087 "adrfam": "IPv4", 00:16:52.087 "traddr": "10.0.0.2", 00:16:52.087 "trsvcid": "4420" 00:16:52.087 }, 00:16:52.087 "peer_address": { 00:16:52.087 "trtype": "TCP", 00:16:52.087 "adrfam": "IPv4", 00:16:52.087 "traddr": "10.0.0.1", 00:16:52.087 "trsvcid": "55682" 00:16:52.087 }, 00:16:52.087 "auth": { 00:16:52.087 "state": "completed", 00:16:52.087 "digest": "sha256", 00:16:52.088 "dhgroup": "ffdhe4096" 00:16:52.088 } 00:16:52.088 } 00:16:52.088 ]' 00:16:52.088 14:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.088 14:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.088 14:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.088 14:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.088 14:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.088 14:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.088 14:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.088 14:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.346 14:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:16:52.913 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.913 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.913 14:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.913 14:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.913 14:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.913 14:44:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.913 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.913 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.173 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:53.173 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.173 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.173 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:53.173 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:53.173 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.173 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:53.173 14:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.173 14:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.173 14:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.173 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.173 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.431 00:16:53.431 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.431 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.431 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.431 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.431 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.431 14:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.431 14:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.431 14:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.431 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.431 { 00:16:53.431 "cntlid": 31, 00:16:53.431 "qid": 0, 00:16:53.431 "state": "enabled", 00:16:53.431 "thread": "nvmf_tgt_poll_group_000", 00:16:53.431 "listen_address": { 00:16:53.431 "trtype": "TCP", 00:16:53.431 "adrfam": "IPv4", 00:16:53.431 "traddr": "10.0.0.2", 00:16:53.431 "trsvcid": "4420" 00:16:53.431 }, 
00:16:53.431 "peer_address": { 00:16:53.431 "trtype": "TCP", 00:16:53.431 "adrfam": "IPv4", 00:16:53.431 "traddr": "10.0.0.1", 00:16:53.431 "trsvcid": "55710" 00:16:53.431 }, 00:16:53.431 "auth": { 00:16:53.431 "state": "completed", 00:16:53.431 "digest": "sha256", 00:16:53.431 "dhgroup": "ffdhe4096" 00:16:53.431 } 00:16:53.431 } 00:16:53.431 ]' 00:16:53.431 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.690 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.690 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.690 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.690 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.690 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.690 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.690 14:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.948 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.516 14:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.083 00:16:55.083 14:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.083 14:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.083 14:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.083 14:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.083 14:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.083 14:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.083 14:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.083 14:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.083 14:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.084 { 00:16:55.084 "cntlid": 33, 00:16:55.084 "qid": 0, 00:16:55.084 "state": "enabled", 00:16:55.084 "thread": "nvmf_tgt_poll_group_000", 00:16:55.084 "listen_address": { 00:16:55.084 "trtype": "TCP", 00:16:55.084 "adrfam": "IPv4", 00:16:55.084 "traddr": "10.0.0.2", 00:16:55.084 "trsvcid": "4420" 00:16:55.084 }, 00:16:55.084 "peer_address": { 00:16:55.084 "trtype": "TCP", 00:16:55.084 "adrfam": "IPv4", 00:16:55.084 "traddr": "10.0.0.1", 00:16:55.084 "trsvcid": "55736" 00:16:55.084 }, 00:16:55.084 "auth": { 00:16:55.084 "state": "completed", 00:16:55.084 "digest": "sha256", 00:16:55.084 "dhgroup": "ffdhe6144" 00:16:55.084 } 00:16:55.084 } 00:16:55.084 ]' 00:16:55.084 14:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.084 14:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.084 14:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.084 14:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.084 14:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.342 14:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.342 14:44:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.342 14:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.342 14:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:16:55.910 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.910 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.910 14:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.910 14:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.910 14:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.910 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.910 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.910 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.169 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:56.169 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.169 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:56.169 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:56.169 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:56.169 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.169 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.169 14:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.169 14:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.169 14:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.169 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.169 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.427 00:16:56.427 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.428 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.428 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.686 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.686 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.686 14:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.686 14:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.686 14:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.686 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.686 { 00:16:56.686 "cntlid": 35, 00:16:56.686 "qid": 0, 00:16:56.686 "state": "enabled", 00:16:56.686 "thread": "nvmf_tgt_poll_group_000", 00:16:56.686 "listen_address": { 00:16:56.686 "trtype": "TCP", 00:16:56.686 "adrfam": "IPv4", 00:16:56.686 "traddr": "10.0.0.2", 00:16:56.686 "trsvcid": "4420" 00:16:56.686 }, 00:16:56.686 "peer_address": { 00:16:56.686 "trtype": "TCP", 00:16:56.686 "adrfam": "IPv4", 00:16:56.686 "traddr": "10.0.0.1", 00:16:56.686 "trsvcid": "55942" 00:16:56.686 }, 00:16:56.686 "auth": { 00:16:56.686 "state": "completed", 00:16:56.687 "digest": "sha256", 00:16:56.687 "dhgroup": "ffdhe6144" 00:16:56.687 } 00:16:56.687 } 00:16:56.687 ]' 00:16:56.687 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.687 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.687 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.687 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.687 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.687 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.687 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.687 14:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.944 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:16:57.509 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.509 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.509 14:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.509 14:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.509 14:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.509 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.509 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.509 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.767 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:57.767 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.767 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.767 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:57.767 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:57.767 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.767 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.767 14:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.767 14:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.767 14:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.767 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.767 14:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.026 00:16:58.026 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.026 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.026 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.285 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.285 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.285 14:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.285 14:44:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:58.285 14:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.285 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.285 { 00:16:58.285 "cntlid": 37, 00:16:58.285 "qid": 0, 00:16:58.285 "state": "enabled", 00:16:58.285 "thread": "nvmf_tgt_poll_group_000", 00:16:58.285 "listen_address": { 00:16:58.285 "trtype": "TCP", 00:16:58.285 "adrfam": "IPv4", 00:16:58.285 "traddr": "10.0.0.2", 00:16:58.285 "trsvcid": "4420" 00:16:58.285 }, 00:16:58.285 "peer_address": { 00:16:58.285 "trtype": "TCP", 00:16:58.285 "adrfam": "IPv4", 00:16:58.285 "traddr": "10.0.0.1", 00:16:58.285 "trsvcid": "55968" 00:16:58.285 }, 00:16:58.285 "auth": { 00:16:58.285 "state": "completed", 00:16:58.285 "digest": "sha256", 00:16:58.285 "dhgroup": "ffdhe6144" 00:16:58.285 } 00:16:58.285 } 00:16:58.285 ]' 00:16:58.285 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.285 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.285 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.285 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:58.285 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.285 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.285 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.285 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.544 14:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:16:59.111 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.111 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.111 14:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.111 14:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.111 14:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.111 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.111 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:59.111 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:59.371 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:16:59.371 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.371 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.371 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:59.371 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:59.371 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.371 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:59.371 14:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.371 14:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.371 14:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.371 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.371 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.630 00:16:59.630 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.630 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.630 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.888 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.888 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.888 14:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.888 14:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.888 14:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.888 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.888 { 00:16:59.888 "cntlid": 39, 00:16:59.888 "qid": 0, 00:16:59.888 "state": "enabled", 00:16:59.888 "thread": "nvmf_tgt_poll_group_000", 00:16:59.888 "listen_address": { 00:16:59.888 "trtype": "TCP", 00:16:59.888 "adrfam": "IPv4", 00:16:59.888 "traddr": "10.0.0.2", 00:16:59.888 "trsvcid": "4420" 00:16:59.888 }, 00:16:59.888 "peer_address": { 00:16:59.888 "trtype": "TCP", 00:16:59.888 "adrfam": "IPv4", 00:16:59.888 "traddr": "10.0.0.1", 00:16:59.888 "trsvcid": "55982" 00:16:59.888 }, 00:16:59.888 "auth": { 00:16:59.888 "state": "completed", 00:16:59.888 "digest": "sha256", 00:16:59.888 "dhgroup": "ffdhe6144" 00:16:59.888 } 00:16:59.888 } 00:16:59.888 ]' 00:16:59.888 14:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.888 14:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.888 14:44:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.888 14:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.888 14:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.888 14:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.888 14:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.888 14:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.147 14:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:17:00.768 14:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.768 14:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.768 14:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.768 14:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.768 14:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.768 14:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.768 14:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.768 14:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.768 14:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.768 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:00.768 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.768 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:00.768 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:00.768 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:00.768 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.768 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.768 14:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.768 14:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.768 14:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.768 14:44:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.768 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.335 00:17:01.335 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.335 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.335 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.594 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.594 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.594 14:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.594 14:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.594 14:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.594 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.594 { 00:17:01.594 "cntlid": 41, 00:17:01.594 "qid": 0, 00:17:01.594 "state": "enabled", 00:17:01.594 "thread": "nvmf_tgt_poll_group_000", 00:17:01.594 "listen_address": { 00:17:01.594 "trtype": "TCP", 00:17:01.594 "adrfam": "IPv4", 00:17:01.594 "traddr": "10.0.0.2", 00:17:01.594 "trsvcid": "4420" 00:17:01.594 }, 00:17:01.594 "peer_address": { 00:17:01.594 "trtype": "TCP", 00:17:01.594 "adrfam": "IPv4", 00:17:01.594 "traddr": "10.0.0.1", 00:17:01.594 "trsvcid": "56010" 00:17:01.594 }, 00:17:01.594 "auth": { 00:17:01.594 "state": "completed", 00:17:01.594 "digest": "sha256", 00:17:01.594 "dhgroup": "ffdhe8192" 00:17:01.594 } 00:17:01.594 } 00:17:01.594 ]' 00:17:01.594 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.594 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.594 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.594 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.594 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.594 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.594 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.594 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.854 14:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:17:02.423 14:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.423 14:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.423 14:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.423 14:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.423 14:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.423 14:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.424 14:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:02.424 14:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:02.424 14:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:02.424 14:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.424 14:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:02.424 14:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:02.424 14:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:02.424 14:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.424 14:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.424 14:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.424 14:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.424 14:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.424 14:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.424 14:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.994 00:17:02.994 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.994 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.994 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.253 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.254 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.254 14:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.254 14:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.254 14:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.254 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.254 { 00:17:03.254 "cntlid": 43, 00:17:03.254 "qid": 0, 00:17:03.254 "state": "enabled", 00:17:03.254 "thread": "nvmf_tgt_poll_group_000", 00:17:03.254 "listen_address": { 00:17:03.254 "trtype": "TCP", 00:17:03.254 "adrfam": "IPv4", 00:17:03.254 "traddr": "10.0.0.2", 00:17:03.254 "trsvcid": "4420" 00:17:03.254 }, 00:17:03.254 "peer_address": { 00:17:03.254 "trtype": "TCP", 00:17:03.254 "adrfam": "IPv4", 00:17:03.254 "traddr": "10.0.0.1", 00:17:03.254 "trsvcid": "56044" 00:17:03.254 }, 00:17:03.254 "auth": { 00:17:03.254 "state": "completed", 00:17:03.254 "digest": "sha256", 00:17:03.254 "dhgroup": "ffdhe8192" 00:17:03.254 } 00:17:03.254 } 00:17:03.254 ]' 00:17:03.254 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.254 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.254 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.254 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.254 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.254 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.254 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.254 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.513 14:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:17:04.082 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.082 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.082 14:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.082 14:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.082 14:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.082 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:17:04.082 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:04.082 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:04.343 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:04.343 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.343 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:04.343 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:04.343 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:04.343 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.343 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.343 14:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.343 14:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.343 14:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.343 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.343 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.602 00:17:04.602 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.602 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.602 14:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.861 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.861 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.861 14:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.861 14:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.120 14:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.120 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.120 { 00:17:05.120 "cntlid": 45, 00:17:05.120 "qid": 0, 00:17:05.120 "state": "enabled", 00:17:05.120 "thread": "nvmf_tgt_poll_group_000", 00:17:05.120 "listen_address": { 00:17:05.120 "trtype": "TCP", 00:17:05.120 "adrfam": "IPv4", 00:17:05.120 "traddr": "10.0.0.2", 00:17:05.120 "trsvcid": "4420" 
00:17:05.120 }, 00:17:05.120 "peer_address": { 00:17:05.120 "trtype": "TCP", 00:17:05.120 "adrfam": "IPv4", 00:17:05.120 "traddr": "10.0.0.1", 00:17:05.120 "trsvcid": "56066" 00:17:05.120 }, 00:17:05.120 "auth": { 00:17:05.120 "state": "completed", 00:17:05.120 "digest": "sha256", 00:17:05.120 "dhgroup": "ffdhe8192" 00:17:05.120 } 00:17:05.120 } 00:17:05.121 ]' 00:17:05.121 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.121 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.121 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.121 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:05.121 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.121 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.121 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.121 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.380 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:17:05.949 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.949 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.949 14:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.949 14:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.949 14:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.949 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.949 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.949 14:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.949 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:05.949 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.949 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:05.949 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:05.949 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:05.949 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.949 14:44:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:05.949 14:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.949 14:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.949 14:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.949 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.949 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.518 00:17:06.518 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.518 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.518 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.779 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.779 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.779 14:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.779 14:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.779 14:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.779 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.779 { 00:17:06.779 "cntlid": 47, 00:17:06.779 "qid": 0, 00:17:06.779 "state": "enabled", 00:17:06.779 "thread": "nvmf_tgt_poll_group_000", 00:17:06.779 "listen_address": { 00:17:06.779 "trtype": "TCP", 00:17:06.779 "adrfam": "IPv4", 00:17:06.779 "traddr": "10.0.0.2", 00:17:06.779 "trsvcid": "4420" 00:17:06.779 }, 00:17:06.779 "peer_address": { 00:17:06.779 "trtype": "TCP", 00:17:06.779 "adrfam": "IPv4", 00:17:06.779 "traddr": "10.0.0.1", 00:17:06.779 "trsvcid": "56080" 00:17:06.779 }, 00:17:06.779 "auth": { 00:17:06.779 "state": "completed", 00:17:06.779 "digest": "sha256", 00:17:06.779 "dhgroup": "ffdhe8192" 00:17:06.779 } 00:17:06.779 } 00:17:06.779 ]' 00:17:06.779 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.779 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.779 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.779 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.779 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.779 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.779 14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.779 
14:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.038 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:17:07.608 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.608 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.608 14:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.608 14:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.608 14:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.608 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:07.608 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.608 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.608 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:07.608 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:07.868 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:07.868 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.868 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:07.868 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:07.868 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:07.868 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.868 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.868 14:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.868 14:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.868 14:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.868 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.868 14:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.128 00:17:08.128 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.128 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.128 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.128 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.128 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.128 14:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.128 14:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.128 14:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.389 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.389 { 00:17:08.389 "cntlid": 49, 00:17:08.389 "qid": 0, 00:17:08.389 "state": "enabled", 00:17:08.389 "thread": "nvmf_tgt_poll_group_000", 00:17:08.389 "listen_address": { 00:17:08.389 "trtype": "TCP", 00:17:08.389 "adrfam": "IPv4", 00:17:08.389 "traddr": "10.0.0.2", 00:17:08.389 "trsvcid": "4420" 00:17:08.389 }, 00:17:08.389 "peer_address": { 00:17:08.389 "trtype": "TCP", 00:17:08.389 "adrfam": "IPv4", 00:17:08.389 "traddr": "10.0.0.1", 00:17:08.389 "trsvcid": "60402" 00:17:08.389 }, 00:17:08.389 "auth": { 00:17:08.389 "state": "completed", 00:17:08.389 "digest": "sha384", 00:17:08.389 "dhgroup": "null" 00:17:08.389 } 00:17:08.389 } 00:17:08.389 ]' 00:17:08.389 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.389 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.389 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.389 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:08.389 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.389 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.389 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.389 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.649 14:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.219 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.478 00:17:09.478 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.478 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.478 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.737 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.737 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.737 14:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.737 14:44:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:09.737 14:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.737 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.737 { 00:17:09.737 "cntlid": 51, 00:17:09.737 "qid": 0, 00:17:09.737 "state": "enabled", 00:17:09.737 "thread": "nvmf_tgt_poll_group_000", 00:17:09.737 "listen_address": { 00:17:09.737 "trtype": "TCP", 00:17:09.737 "adrfam": "IPv4", 00:17:09.737 "traddr": "10.0.0.2", 00:17:09.737 "trsvcid": "4420" 00:17:09.737 }, 00:17:09.737 "peer_address": { 00:17:09.737 "trtype": "TCP", 00:17:09.737 "adrfam": "IPv4", 00:17:09.737 "traddr": "10.0.0.1", 00:17:09.737 "trsvcid": "60420" 00:17:09.737 }, 00:17:09.737 "auth": { 00:17:09.737 "state": "completed", 00:17:09.737 "digest": "sha384", 00:17:09.737 "dhgroup": "null" 00:17:09.737 } 00:17:09.737 } 00:17:09.737 ]' 00:17:09.737 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.737 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.737 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.737 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:09.737 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.737 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.737 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.737 14:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.997 14:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:17:10.566 14:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.566 14:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.566 14:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.566 14:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.566 14:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.566 14:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.566 14:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:10.566 14:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:10.826 14:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:10.826 14:44:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.826 14:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.826 14:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:10.826 14:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:10.826 14:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.826 14:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.826 14:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.826 14:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.826 14:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.826 14:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.826 14:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.084 00:17:11.084 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.084 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.084 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.084 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.084 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.084 14:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.084 14:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.084 14:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.084 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.084 { 00:17:11.084 "cntlid": 53, 00:17:11.084 "qid": 0, 00:17:11.084 "state": "enabled", 00:17:11.084 "thread": "nvmf_tgt_poll_group_000", 00:17:11.084 "listen_address": { 00:17:11.084 "trtype": "TCP", 00:17:11.084 "adrfam": "IPv4", 00:17:11.084 "traddr": "10.0.0.2", 00:17:11.084 "trsvcid": "4420" 00:17:11.084 }, 00:17:11.084 "peer_address": { 00:17:11.084 "trtype": "TCP", 00:17:11.084 "adrfam": "IPv4", 00:17:11.084 "traddr": "10.0.0.1", 00:17:11.084 "trsvcid": "60450" 00:17:11.084 }, 00:17:11.084 "auth": { 00:17:11.084 "state": "completed", 00:17:11.084 "digest": "sha384", 00:17:11.084 "dhgroup": "null" 00:17:11.084 } 00:17:11.084 } 00:17:11.084 ]' 00:17:11.084 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.084 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:17:11.084 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.344 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:11.344 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.344 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.344 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.344 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.603 14:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.172 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.432 00:17:12.432 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.432 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.432 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.691 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.691 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.691 14:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.691 14:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.691 14:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.691 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.691 { 00:17:12.691 "cntlid": 55, 00:17:12.691 "qid": 0, 00:17:12.691 "state": "enabled", 00:17:12.691 "thread": "nvmf_tgt_poll_group_000", 00:17:12.691 "listen_address": { 00:17:12.691 "trtype": "TCP", 00:17:12.691 "adrfam": "IPv4", 00:17:12.691 "traddr": "10.0.0.2", 00:17:12.691 "trsvcid": "4420" 00:17:12.691 }, 00:17:12.691 "peer_address": { 00:17:12.691 "trtype": "TCP", 00:17:12.691 "adrfam": "IPv4", 00:17:12.691 "traddr": "10.0.0.1", 00:17:12.691 "trsvcid": "60478" 00:17:12.691 }, 00:17:12.691 "auth": { 00:17:12.691 "state": "completed", 00:17:12.691 "digest": "sha384", 00:17:12.691 "dhgroup": "null" 00:17:12.692 } 00:17:12.692 } 00:17:12.692 ]' 00:17:12.692 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.692 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.692 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.692 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:12.692 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.692 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.692 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.692 14:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.951 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:17:13.519 14:44:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.519 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.519 14:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.519 14:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.519 14:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.519 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.519 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.519 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.520 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.778 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:13.778 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.778 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.778 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:13.778 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.778 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.778 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.778 14:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.778 14:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.778 14:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.778 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.778 14:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.778 00:17:13.778 14:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.778 14:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.778 14:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.038 14:44:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.038 14:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.038 14:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.038 14:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.038 14:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.038 14:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.038 { 00:17:14.038 "cntlid": 57, 00:17:14.038 "qid": 0, 00:17:14.038 "state": "enabled", 00:17:14.038 "thread": "nvmf_tgt_poll_group_000", 00:17:14.038 "listen_address": { 00:17:14.038 "trtype": "TCP", 00:17:14.038 "adrfam": "IPv4", 00:17:14.038 "traddr": "10.0.0.2", 00:17:14.038 "trsvcid": "4420" 00:17:14.038 }, 00:17:14.038 "peer_address": { 00:17:14.038 "trtype": "TCP", 00:17:14.038 "adrfam": "IPv4", 00:17:14.038 "traddr": "10.0.0.1", 00:17:14.038 "trsvcid": "60508" 00:17:14.038 }, 00:17:14.038 "auth": { 00:17:14.038 "state": "completed", 00:17:14.038 "digest": "sha384", 00:17:14.038 "dhgroup": "ffdhe2048" 00:17:14.038 } 00:17:14.038 } 00:17:14.038 ]' 00:17:14.038 14:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.038 14:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.038 14:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.298 14:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.298 14:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.298 14:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.298 14:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.298 14:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.298 14:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:17:14.904 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.904 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.904 14:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.904 14:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.904 14:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.904 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.904 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.904 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:15.183 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:15.183 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.183 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.183 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:15.183 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.183 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.183 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.183 14:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.183 14:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.183 14:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.183 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.183 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.443 00:17:15.443 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.443 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.443 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.443 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.443 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.443 14:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.443 14:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.701 14:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.701 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.701 { 00:17:15.701 "cntlid": 59, 00:17:15.701 "qid": 0, 00:17:15.701 "state": "enabled", 00:17:15.701 "thread": "nvmf_tgt_poll_group_000", 00:17:15.701 "listen_address": { 00:17:15.701 "trtype": "TCP", 00:17:15.701 "adrfam": "IPv4", 00:17:15.701 "traddr": "10.0.0.2", 00:17:15.701 "trsvcid": "4420" 00:17:15.701 }, 00:17:15.701 "peer_address": { 00:17:15.701 "trtype": "TCP", 00:17:15.701 "adrfam": "IPv4", 00:17:15.701 
"traddr": "10.0.0.1", 00:17:15.701 "trsvcid": "60526" 00:17:15.701 }, 00:17:15.701 "auth": { 00:17:15.701 "state": "completed", 00:17:15.701 "digest": "sha384", 00:17:15.701 "dhgroup": "ffdhe2048" 00:17:15.701 } 00:17:15.701 } 00:17:15.701 ]' 00:17:15.701 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.701 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.701 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.701 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:15.701 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.701 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.701 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.701 14:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.960 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.530 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.789 00:17:16.789 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.789 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.789 14:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.049 14:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.049 14:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.049 14:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.049 14:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.049 14:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.049 14:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.049 { 00:17:17.049 "cntlid": 61, 00:17:17.049 "qid": 0, 00:17:17.049 "state": "enabled", 00:17:17.049 "thread": "nvmf_tgt_poll_group_000", 00:17:17.049 "listen_address": { 00:17:17.049 "trtype": "TCP", 00:17:17.049 "adrfam": "IPv4", 00:17:17.049 "traddr": "10.0.0.2", 00:17:17.049 "trsvcid": "4420" 00:17:17.049 }, 00:17:17.049 "peer_address": { 00:17:17.049 "trtype": "TCP", 00:17:17.049 "adrfam": "IPv4", 00:17:17.049 "traddr": "10.0.0.1", 00:17:17.049 "trsvcid": "52112" 00:17:17.049 }, 00:17:17.049 "auth": { 00:17:17.049 "state": "completed", 00:17:17.049 "digest": "sha384", 00:17:17.049 "dhgroup": "ffdhe2048" 00:17:17.049 } 00:17:17.049 } 00:17:17.049 ]' 00:17:17.049 14:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.049 14:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.049 14:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.049 14:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:17.049 14:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.049 14:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.049 14:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.049 14:44:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.307 14:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:17:17.875 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.875 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.875 14:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.875 14:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.875 14:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.875 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.875 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.875 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:18.134 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:18.134 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.134 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:18.134 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:18.134 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:18.134 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.134 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:18.134 14:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.134 14:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.134 14:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.134 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.135 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.394 00:17:18.394 14:44:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.394 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.394 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.394 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.394 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.394 14:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.394 14:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.394 14:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.394 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.394 { 00:17:18.394 "cntlid": 63, 00:17:18.394 "qid": 0, 00:17:18.394 "state": "enabled", 00:17:18.394 "thread": "nvmf_tgt_poll_group_000", 00:17:18.394 "listen_address": { 00:17:18.394 "trtype": "TCP", 00:17:18.394 "adrfam": "IPv4", 00:17:18.394 "traddr": "10.0.0.2", 00:17:18.394 "trsvcid": "4420" 00:17:18.394 }, 00:17:18.394 "peer_address": { 00:17:18.394 "trtype": "TCP", 00:17:18.394 "adrfam": "IPv4", 00:17:18.394 "traddr": "10.0.0.1", 00:17:18.394 "trsvcid": "52146" 00:17:18.394 }, 00:17:18.394 "auth": { 00:17:18.394 "state": "completed", 00:17:18.394 "digest": "sha384", 00:17:18.394 "dhgroup": "ffdhe2048" 00:17:18.394 } 00:17:18.394 } 00:17:18.394 ]' 00:17:18.394 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.654 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.654 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.654 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:18.654 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.654 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.654 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.654 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.913 14:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
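[editor's note] The entries above close out the sha384 / ffdhe2048 pass of target/auth.sh (keys 0 through 3); the entries that follow rerun the same loop with ffdhe3072. Because each iteration is interleaved with xtrace output, here is a condensed sketch of the commands a single connect_authenticate iteration issues, reconstructed from the key2 run traced above. It is illustrative only: "rpc.py" abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, the target-side calls go through the script's rpc_cmd wrapper to the target app's RPC socket, the key2/ckey2 key names are registered earlier in auth.sh (outside this excerpt), and the DHHC-1 secrets are shortened to placeholders.

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host-side bdev_nvme: restrict DH-HMAC-CHAP to the digest/dhgroup under test.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Target side (rpc_cmd in the script): allow the host on the subsystem with this key pair.
rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach the controller; this is where the authentication handshake runs.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0

# Target side: confirm the qpair negotiated the expected digest/dhgroup and completed auth.
rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | \
    jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# Tear down the SPDK-initiator path, then repeat the handshake with the kernel
# initiator (nvme-cli), passing the plain-text secrets directly.
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-secret "DHHC-1:02:..." --dhchap-ctrl-secret "DHHC-1:01:..."
nvme disconnect -n "$SUBNQN"
rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

[end editor's note]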
00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.481 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.741 00:17:19.741 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.741 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.741 14:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.002 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.002 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.002 14:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.002 14:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.002 14:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.002 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.002 { 
00:17:20.002 "cntlid": 65, 00:17:20.002 "qid": 0, 00:17:20.002 "state": "enabled", 00:17:20.002 "thread": "nvmf_tgt_poll_group_000", 00:17:20.002 "listen_address": { 00:17:20.002 "trtype": "TCP", 00:17:20.002 "adrfam": "IPv4", 00:17:20.002 "traddr": "10.0.0.2", 00:17:20.002 "trsvcid": "4420" 00:17:20.002 }, 00:17:20.002 "peer_address": { 00:17:20.002 "trtype": "TCP", 00:17:20.002 "adrfam": "IPv4", 00:17:20.002 "traddr": "10.0.0.1", 00:17:20.002 "trsvcid": "52182" 00:17:20.002 }, 00:17:20.002 "auth": { 00:17:20.002 "state": "completed", 00:17:20.002 "digest": "sha384", 00:17:20.002 "dhgroup": "ffdhe3072" 00:17:20.002 } 00:17:20.002 } 00:17:20.002 ]' 00:17:20.002 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.002 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.002 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.002 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.002 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.002 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.002 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.002 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.261 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:17:20.830 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.830 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.830 14:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.830 14:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.830 14:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.830 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.830 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.830 14:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.830 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:20.830 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.830 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:17:20.830 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:20.830 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:20.830 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.830 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.830 14:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.830 14:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.830 14:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.830 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.830 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.089 00:17:21.089 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.089 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.089 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.347 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.347 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.347 14:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.347 14:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.348 14:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.348 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.348 { 00:17:21.348 "cntlid": 67, 00:17:21.348 "qid": 0, 00:17:21.348 "state": "enabled", 00:17:21.348 "thread": "nvmf_tgt_poll_group_000", 00:17:21.348 "listen_address": { 00:17:21.348 "trtype": "TCP", 00:17:21.348 "adrfam": "IPv4", 00:17:21.348 "traddr": "10.0.0.2", 00:17:21.348 "trsvcid": "4420" 00:17:21.348 }, 00:17:21.348 "peer_address": { 00:17:21.348 "trtype": "TCP", 00:17:21.348 "adrfam": "IPv4", 00:17:21.348 "traddr": "10.0.0.1", 00:17:21.348 "trsvcid": "52214" 00:17:21.348 }, 00:17:21.348 "auth": { 00:17:21.348 "state": "completed", 00:17:21.348 "digest": "sha384", 00:17:21.348 "dhgroup": "ffdhe3072" 00:17:21.348 } 00:17:21.348 } 00:17:21.348 ]' 00:17:21.348 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.348 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.348 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.348 14:44:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:21.348 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.606 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.606 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.606 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.606 14:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:17:22.175 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.175 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.175 14:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.175 14:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.175 14:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.175 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.175 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:22.175 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:22.435 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:22.435 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.435 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.435 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:22.435 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:22.435 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.435 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.435 14:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.435 14:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.435 14:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.435 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.435 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.695 00:17:22.695 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.695 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.695 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.955 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.955 14:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.955 14:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.955 14:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.955 14:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.955 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.955 { 00:17:22.955 "cntlid": 69, 00:17:22.955 "qid": 0, 00:17:22.955 "state": "enabled", 00:17:22.955 "thread": "nvmf_tgt_poll_group_000", 00:17:22.955 "listen_address": { 00:17:22.955 "trtype": "TCP", 00:17:22.955 "adrfam": "IPv4", 00:17:22.955 "traddr": "10.0.0.2", 00:17:22.955 "trsvcid": "4420" 00:17:22.955 }, 00:17:22.955 "peer_address": { 00:17:22.955 "trtype": "TCP", 00:17:22.955 "adrfam": "IPv4", 00:17:22.955 "traddr": "10.0.0.1", 00:17:22.955 "trsvcid": "52248" 00:17:22.955 }, 00:17:22.955 "auth": { 00:17:22.955 "state": "completed", 00:17:22.955 "digest": "sha384", 00:17:22.955 "dhgroup": "ffdhe3072" 00:17:22.955 } 00:17:22.955 } 00:17:22.955 ]' 00:17:22.955 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.955 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.955 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.955 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.955 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.955 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.955 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.955 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.215 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret 
DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:17:23.784 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.784 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.784 14:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.784 14:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.784 14:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.784 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.784 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.784 14:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.784 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:23.784 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.784 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:23.784 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:23.784 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:23.784 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.784 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:23.784 14:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.784 14:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.784 14:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.784 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.784 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.043 00:17:24.043 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.044 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.044 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.303 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.303 14:44:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.303 14:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.303 14:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.303 14:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.303 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.303 { 00:17:24.303 "cntlid": 71, 00:17:24.303 "qid": 0, 00:17:24.303 "state": "enabled", 00:17:24.303 "thread": "nvmf_tgt_poll_group_000", 00:17:24.303 "listen_address": { 00:17:24.303 "trtype": "TCP", 00:17:24.303 "adrfam": "IPv4", 00:17:24.303 "traddr": "10.0.0.2", 00:17:24.303 "trsvcid": "4420" 00:17:24.303 }, 00:17:24.303 "peer_address": { 00:17:24.303 "trtype": "TCP", 00:17:24.303 "adrfam": "IPv4", 00:17:24.303 "traddr": "10.0.0.1", 00:17:24.303 "trsvcid": "52282" 00:17:24.303 }, 00:17:24.303 "auth": { 00:17:24.303 "state": "completed", 00:17:24.303 "digest": "sha384", 00:17:24.303 "dhgroup": "ffdhe3072" 00:17:24.303 } 00:17:24.303 } 00:17:24.303 ]' 00:17:24.303 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.303 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.303 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.303 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.303 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.563 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.563 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.563 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.563 14:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:17:25.132 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.132 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.132 14:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.132 14:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.132 14:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.132 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.132 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.132 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.132 14:44:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.392 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:25.392 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.392 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.392 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:25.392 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:25.392 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.392 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.392 14:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.392 14:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.392 14:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.392 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.392 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.652 00:17:25.652 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.652 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.652 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.912 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.912 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.912 14:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.912 14:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.912 14:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.912 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.912 { 00:17:25.912 "cntlid": 73, 00:17:25.912 "qid": 0, 00:17:25.912 "state": "enabled", 00:17:25.912 "thread": "nvmf_tgt_poll_group_000", 00:17:25.912 "listen_address": { 00:17:25.912 "trtype": "TCP", 00:17:25.912 "adrfam": "IPv4", 00:17:25.912 "traddr": "10.0.0.2", 00:17:25.912 "trsvcid": "4420" 00:17:25.912 }, 00:17:25.912 "peer_address": { 00:17:25.912 "trtype": "TCP", 00:17:25.912 "adrfam": "IPv4", 00:17:25.912 "traddr": "10.0.0.1", 00:17:25.912 "trsvcid": "52306" 00:17:25.912 }, 00:17:25.912 "auth": { 00:17:25.912 
"state": "completed", 00:17:25.912 "digest": "sha384", 00:17:25.912 "dhgroup": "ffdhe4096" 00:17:25.912 } 00:17:25.912 } 00:17:25.912 ]' 00:17:25.912 14:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.912 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.912 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.912 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.912 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.912 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.912 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.912 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.172 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:17:26.740 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.741 14:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.000 00:17:27.000 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.000 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.000 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.260 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.260 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.260 14:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.260 14:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.260 14:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.260 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.260 { 00:17:27.260 "cntlid": 75, 00:17:27.260 "qid": 0, 00:17:27.260 "state": "enabled", 00:17:27.260 "thread": "nvmf_tgt_poll_group_000", 00:17:27.260 "listen_address": { 00:17:27.260 "trtype": "TCP", 00:17:27.260 "adrfam": "IPv4", 00:17:27.260 "traddr": "10.0.0.2", 00:17:27.260 "trsvcid": "4420" 00:17:27.260 }, 00:17:27.260 "peer_address": { 00:17:27.260 "trtype": "TCP", 00:17:27.260 "adrfam": "IPv4", 00:17:27.260 "traddr": "10.0.0.1", 00:17:27.260 "trsvcid": "41248" 00:17:27.260 }, 00:17:27.260 "auth": { 00:17:27.260 "state": "completed", 00:17:27.260 "digest": "sha384", 00:17:27.260 "dhgroup": "ffdhe4096" 00:17:27.260 } 00:17:27.260 } 00:17:27.260 ]' 00:17:27.260 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.260 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.260 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.260 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.260 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.520 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.520 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.520 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.520 14:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:17:28.090 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.090 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.090 14:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.090 14:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.090 14:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.090 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.090 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.090 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.350 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:28.350 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.350 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.350 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:28.350 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:28.350 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.351 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.351 14:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.351 14:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.351 14:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.351 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.351 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:28.611 00:17:28.611 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.611 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.611 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.871 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.871 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.871 14:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.871 14:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.871 14:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.871 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.871 { 00:17:28.871 "cntlid": 77, 00:17:28.871 "qid": 0, 00:17:28.871 "state": "enabled", 00:17:28.871 "thread": "nvmf_tgt_poll_group_000", 00:17:28.871 "listen_address": { 00:17:28.871 "trtype": "TCP", 00:17:28.871 "adrfam": "IPv4", 00:17:28.871 "traddr": "10.0.0.2", 00:17:28.871 "trsvcid": "4420" 00:17:28.871 }, 00:17:28.871 "peer_address": { 00:17:28.871 "trtype": "TCP", 00:17:28.871 "adrfam": "IPv4", 00:17:28.871 "traddr": "10.0.0.1", 00:17:28.871 "trsvcid": "41274" 00:17:28.871 }, 00:17:28.871 "auth": { 00:17:28.871 "state": "completed", 00:17:28.871 "digest": "sha384", 00:17:28.871 "dhgroup": "ffdhe4096" 00:17:28.871 } 00:17:28.871 } 00:17:28.871 ]' 00:17:28.871 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.871 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.871 14:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.871 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.871 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.871 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.871 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.871 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.174 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.752 14:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.013 00:17:30.013 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.013 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.013 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.272 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.273 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.273 14:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.273 14:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.273 14:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.273 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.273 { 00:17:30.273 "cntlid": 79, 00:17:30.273 "qid": 
0, 00:17:30.273 "state": "enabled", 00:17:30.273 "thread": "nvmf_tgt_poll_group_000", 00:17:30.273 "listen_address": { 00:17:30.273 "trtype": "TCP", 00:17:30.273 "adrfam": "IPv4", 00:17:30.273 "traddr": "10.0.0.2", 00:17:30.273 "trsvcid": "4420" 00:17:30.273 }, 00:17:30.273 "peer_address": { 00:17:30.273 "trtype": "TCP", 00:17:30.273 "adrfam": "IPv4", 00:17:30.273 "traddr": "10.0.0.1", 00:17:30.273 "trsvcid": "41310" 00:17:30.273 }, 00:17:30.273 "auth": { 00:17:30.273 "state": "completed", 00:17:30.273 "digest": "sha384", 00:17:30.273 "dhgroup": "ffdhe4096" 00:17:30.273 } 00:17:30.273 } 00:17:30.273 ]' 00:17:30.273 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.273 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.273 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.273 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:30.273 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.273 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.273 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.273 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.532 14:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:17:31.101 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.101 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.101 14:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.101 14:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.101 14:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.101 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.101 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.101 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.101 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.361 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:31.361 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.361 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.361 14:44:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:31.361 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:31.361 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.361 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.361 14:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.361 14:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.361 14:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.361 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.361 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.620 00:17:31.620 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.620 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.620 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.880 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.880 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.880 14:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.880 14:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.880 14:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.880 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.880 { 00:17:31.880 "cntlid": 81, 00:17:31.880 "qid": 0, 00:17:31.880 "state": "enabled", 00:17:31.880 "thread": "nvmf_tgt_poll_group_000", 00:17:31.880 "listen_address": { 00:17:31.880 "trtype": "TCP", 00:17:31.880 "adrfam": "IPv4", 00:17:31.880 "traddr": "10.0.0.2", 00:17:31.880 "trsvcid": "4420" 00:17:31.880 }, 00:17:31.880 "peer_address": { 00:17:31.880 "trtype": "TCP", 00:17:31.880 "adrfam": "IPv4", 00:17:31.880 "traddr": "10.0.0.1", 00:17:31.880 "trsvcid": "41342" 00:17:31.880 }, 00:17:31.880 "auth": { 00:17:31.880 "state": "completed", 00:17:31.880 "digest": "sha384", 00:17:31.880 "dhgroup": "ffdhe6144" 00:17:31.880 } 00:17:31.880 } 00:17:31.880 ]' 00:17:31.880 14:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.880 14:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.880 14:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.880 14:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.880 14:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.880 14:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.880 14:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.880 14:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.139 14:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:17:32.708 14:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.708 14:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.708 14:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.708 14:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.708 14:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.708 14:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.708 14:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.708 14:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.967 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:32.967 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.967 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:32.967 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:32.967 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:32.967 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.967 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.967 14:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.967 14:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.967 14:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.967 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.968 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.229 00:17:33.229 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.229 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.229 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.487 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.487 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.487 14:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.487 14:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.487 14:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.487 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.487 { 00:17:33.487 "cntlid": 83, 00:17:33.487 "qid": 0, 00:17:33.487 "state": "enabled", 00:17:33.487 "thread": "nvmf_tgt_poll_group_000", 00:17:33.487 "listen_address": { 00:17:33.487 "trtype": "TCP", 00:17:33.487 "adrfam": "IPv4", 00:17:33.487 "traddr": "10.0.0.2", 00:17:33.487 "trsvcid": "4420" 00:17:33.487 }, 00:17:33.487 "peer_address": { 00:17:33.487 "trtype": "TCP", 00:17:33.487 "adrfam": "IPv4", 00:17:33.487 "traddr": "10.0.0.1", 00:17:33.487 "trsvcid": "41368" 00:17:33.487 }, 00:17:33.487 "auth": { 00:17:33.487 "state": "completed", 00:17:33.487 "digest": "sha384", 00:17:33.487 "dhgroup": "ffdhe6144" 00:17:33.487 } 00:17:33.487 } 00:17:33.487 ]' 00:17:33.487 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.487 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.487 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.487 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.487 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.487 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.487 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.487 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.747 14:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret 
DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:17:34.316 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.317 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.886 00:17:34.886 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.886 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.886 14:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.886 14:44:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.886 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.886 14:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.886 14:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.886 14:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.886 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.886 { 00:17:34.886 "cntlid": 85, 00:17:34.886 "qid": 0, 00:17:34.886 "state": "enabled", 00:17:34.886 "thread": "nvmf_tgt_poll_group_000", 00:17:34.886 "listen_address": { 00:17:34.886 "trtype": "TCP", 00:17:34.886 "adrfam": "IPv4", 00:17:34.886 "traddr": "10.0.0.2", 00:17:34.886 "trsvcid": "4420" 00:17:34.886 }, 00:17:34.886 "peer_address": { 00:17:34.886 "trtype": "TCP", 00:17:34.886 "adrfam": "IPv4", 00:17:34.886 "traddr": "10.0.0.1", 00:17:34.886 "trsvcid": "41398" 00:17:34.886 }, 00:17:34.886 "auth": { 00:17:34.886 "state": "completed", 00:17:34.886 "digest": "sha384", 00:17:34.886 "dhgroup": "ffdhe6144" 00:17:34.886 } 00:17:34.886 } 00:17:34.886 ]' 00:17:34.886 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.887 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.887 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.146 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.146 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.146 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.146 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.146 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.146 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:17:35.715 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.715 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.715 14:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.715 14:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.715 14:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.715 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.715 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
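The entries above trace one full pass of the test's connect_authenticate cycle: the host bdev layer is restricted to a single DH-CHAP digest/dhgroup, the target subsystem is told to accept the host NQN with the key under test, a controller is attached with that key, the resulting qpair's auth block is checked, and the controller is detached before the same handshake is repeated through the kernel initiator. Below is a minimal stand-alone sketch of the setup half of that cycle, assuming the same rpc.py path, addresses and NQNs that appear in this log (target RPC server on its default socket, host RPC server on /var/tmp/host.sock); the shell variables are illustrative, and key3 refers to a key name registered earlier in the test.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Host side: only offer the digest/dhgroup combination under test.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # Target side: allow the host NQN to authenticate with the key under test.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

    # Host side: attach a controller, authenticating with the same key.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3
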
00:17:35.715 14:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.976 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:35.976 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.976 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.976 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:35.976 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:35.976 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.976 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:35.976 14:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.976 14:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.976 14:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.976 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.976 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.235 00:17:36.235 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.235 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.235 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.495 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.495 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.495 14:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.495 14:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.495 14:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.495 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.495 { 00:17:36.495 "cntlid": 87, 00:17:36.495 "qid": 0, 00:17:36.495 "state": "enabled", 00:17:36.495 "thread": "nvmf_tgt_poll_group_000", 00:17:36.495 "listen_address": { 00:17:36.495 "trtype": "TCP", 00:17:36.495 "adrfam": "IPv4", 00:17:36.495 "traddr": "10.0.0.2", 00:17:36.495 "trsvcid": "4420" 00:17:36.495 }, 00:17:36.495 "peer_address": { 00:17:36.495 "trtype": "TCP", 00:17:36.495 "adrfam": "IPv4", 00:17:36.495 "traddr": "10.0.0.1", 00:17:36.495 "trsvcid": "41428" 00:17:36.495 }, 00:17:36.495 "auth": { 00:17:36.495 "state": "completed", 
00:17:36.495 "digest": "sha384", 00:17:36.495 "dhgroup": "ffdhe6144" 00:17:36.495 } 00:17:36.495 } 00:17:36.495 ]' 00:17:36.495 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.495 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.495 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.495 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:36.495 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.754 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.754 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.754 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.754 14:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:17:37.322 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.322 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.322 14:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.322 14:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.322 14:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.322 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.322 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.322 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.322 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.582 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:37.582 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.582 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.582 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:37.582 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:37.582 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.583 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:37.583 14:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.583 14:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.583 14:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.583 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.583 14:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.152 00:17:38.152 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.153 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.153 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.153 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.153 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.153 14:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.153 14:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.153 14:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.153 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.153 { 00:17:38.153 "cntlid": 89, 00:17:38.153 "qid": 0, 00:17:38.153 "state": "enabled", 00:17:38.153 "thread": "nvmf_tgt_poll_group_000", 00:17:38.153 "listen_address": { 00:17:38.153 "trtype": "TCP", 00:17:38.153 "adrfam": "IPv4", 00:17:38.153 "traddr": "10.0.0.2", 00:17:38.153 "trsvcid": "4420" 00:17:38.153 }, 00:17:38.153 "peer_address": { 00:17:38.153 "trtype": "TCP", 00:17:38.153 "adrfam": "IPv4", 00:17:38.153 "traddr": "10.0.0.1", 00:17:38.153 "trsvcid": "58034" 00:17:38.153 }, 00:17:38.153 "auth": { 00:17:38.153 "state": "completed", 00:17:38.153 "digest": "sha384", 00:17:38.153 "dhgroup": "ffdhe8192" 00:17:38.153 } 00:17:38.153 } 00:17:38.153 ]' 00:17:38.153 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.153 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.153 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.412 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.412 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.412 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.412 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.412 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.412 14:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:17:38.980 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.980 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.980 14:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.980 14:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.980 14:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.980 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.980 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.980 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.239 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:39.239 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.239 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:39.239 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:39.239 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:39.239 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.239 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.239 14:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.239 14:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.239 14:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.239 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.239 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
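Once the controller is attached (the rpc.py call just above), the verification half of the cycle reads back the controller list and the subsystem's qpairs, checks the negotiated auth parameters with jq, then tears the session down and repeats the handshake with the kernel initiator. A minimal sketch under the same assumptions as the earlier one (same rpc.py path, sockets and NQNs); the DHHC-1 secrets are the literal strings shown in the log, abbreviated here to placeholder variables.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # The attach must have produced a controller named nvme0.
    $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

    # The qpair's auth block records what was actually negotiated.
    qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
    echo "$qpairs" | jq -r '.[0].auth.digest'    # expect: sha384
    echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # expect: ffdhe8192 (the group under test here)
    echo "$qpairs" | jq -r '.[0].auth.state'     # expect: completed

    # Tear down, then repeat the same handshake through the kernel initiator.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
    nvme disconnect -n "$SUBNQN"
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
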
00:17:39.807 00:17:39.807 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.807 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.807 14:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.807 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.807 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.807 14:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.807 14:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.067 14:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.067 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.067 { 00:17:40.067 "cntlid": 91, 00:17:40.067 "qid": 0, 00:17:40.067 "state": "enabled", 00:17:40.067 "thread": "nvmf_tgt_poll_group_000", 00:17:40.067 "listen_address": { 00:17:40.067 "trtype": "TCP", 00:17:40.067 "adrfam": "IPv4", 00:17:40.067 "traddr": "10.0.0.2", 00:17:40.067 "trsvcid": "4420" 00:17:40.067 }, 00:17:40.067 "peer_address": { 00:17:40.067 "trtype": "TCP", 00:17:40.067 "adrfam": "IPv4", 00:17:40.067 "traddr": "10.0.0.1", 00:17:40.067 "trsvcid": "58076" 00:17:40.067 }, 00:17:40.067 "auth": { 00:17:40.067 "state": "completed", 00:17:40.067 "digest": "sha384", 00:17:40.067 "dhgroup": "ffdhe8192" 00:17:40.067 } 00:17:40.067 } 00:17:40.067 ]' 00:17:40.067 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.067 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.067 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.067 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.067 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.067 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.067 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.067 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.326 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:17:40.896 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.896 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.896 14:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:40.896 14:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.896 14:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.896 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.896 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.896 14:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.896 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:40.896 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.896 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:40.897 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:40.897 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:40.897 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.897 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.897 14:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.897 14:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.897 14:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.897 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.897 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.467 00:17:41.467 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.467 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.467 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.727 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.727 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.727 14:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.727 14:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.727 14:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.727 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.727 { 
00:17:41.727 "cntlid": 93, 00:17:41.727 "qid": 0, 00:17:41.727 "state": "enabled", 00:17:41.727 "thread": "nvmf_tgt_poll_group_000", 00:17:41.727 "listen_address": { 00:17:41.727 "trtype": "TCP", 00:17:41.727 "adrfam": "IPv4", 00:17:41.727 "traddr": "10.0.0.2", 00:17:41.727 "trsvcid": "4420" 00:17:41.727 }, 00:17:41.727 "peer_address": { 00:17:41.727 "trtype": "TCP", 00:17:41.727 "adrfam": "IPv4", 00:17:41.727 "traddr": "10.0.0.1", 00:17:41.727 "trsvcid": "58094" 00:17:41.727 }, 00:17:41.727 "auth": { 00:17:41.727 "state": "completed", 00:17:41.727 "digest": "sha384", 00:17:41.727 "dhgroup": "ffdhe8192" 00:17:41.727 } 00:17:41.727 } 00:17:41.727 ]' 00:17:41.727 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.727 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.727 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.727 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.727 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.727 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.727 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.727 14:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.987 14:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:17:42.558 14:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.558 14:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.558 14:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.558 14:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.558 14:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.558 14:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.558 14:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.558 14:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.817 14:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:42.817 14:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.818 14:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.818 14:45:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:42.818 14:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:42.818 14:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.818 14:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:42.818 14:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.818 14:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.818 14:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.818 14:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.818 14:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.078 00:17:43.078 14:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.078 14:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.078 14:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.337 14:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.337 14:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.337 14:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.337 14:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.337 14:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.337 14:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.337 { 00:17:43.337 "cntlid": 95, 00:17:43.337 "qid": 0, 00:17:43.337 "state": "enabled", 00:17:43.337 "thread": "nvmf_tgt_poll_group_000", 00:17:43.337 "listen_address": { 00:17:43.338 "trtype": "TCP", 00:17:43.338 "adrfam": "IPv4", 00:17:43.338 "traddr": "10.0.0.2", 00:17:43.338 "trsvcid": "4420" 00:17:43.338 }, 00:17:43.338 "peer_address": { 00:17:43.338 "trtype": "TCP", 00:17:43.338 "adrfam": "IPv4", 00:17:43.338 "traddr": "10.0.0.1", 00:17:43.338 "trsvcid": "58134" 00:17:43.338 }, 00:17:43.338 "auth": { 00:17:43.338 "state": "completed", 00:17:43.338 "digest": "sha384", 00:17:43.338 "dhgroup": "ffdhe8192" 00:17:43.338 } 00:17:43.338 } 00:17:43.338 ]' 00:17:43.338 14:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.338 14:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.338 14:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.635 14:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.635 14:45:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.635 14:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.635 14:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.635 14:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.635 14:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:17:44.204 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.205 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.205 14:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.205 14:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.205 14:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.205 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:44.205 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.205 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.205 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.205 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.464 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:44.464 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.464 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:44.464 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:44.464 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:44.464 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.464 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.464 14:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.464 14:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.464 14:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.464 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.465 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.724 00:17:44.724 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.724 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.724 14:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.724 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.724 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.724 14:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.724 14:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.985 14:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.985 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.985 { 00:17:44.985 "cntlid": 97, 00:17:44.985 "qid": 0, 00:17:44.985 "state": "enabled", 00:17:44.985 "thread": "nvmf_tgt_poll_group_000", 00:17:44.985 "listen_address": { 00:17:44.985 "trtype": "TCP", 00:17:44.985 "adrfam": "IPv4", 00:17:44.985 "traddr": "10.0.0.2", 00:17:44.985 "trsvcid": "4420" 00:17:44.985 }, 00:17:44.985 "peer_address": { 00:17:44.985 "trtype": "TCP", 00:17:44.985 "adrfam": "IPv4", 00:17:44.985 "traddr": "10.0.0.1", 00:17:44.985 "trsvcid": "58174" 00:17:44.985 }, 00:17:44.985 "auth": { 00:17:44.985 "state": "completed", 00:17:44.985 "digest": "sha512", 00:17:44.985 "dhgroup": "null" 00:17:44.985 } 00:17:44.985 } 00:17:44.985 ]' 00:17:44.985 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.985 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.985 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.985 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:44.985 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.985 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.985 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.985 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.245 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret 
DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:17:45.815 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.815 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.815 14:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.815 14:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.815 14:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.815 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.815 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.815 14:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.815 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:45.815 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.815 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:45.815 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:45.815 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:45.815 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.815 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.815 14:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.815 14:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.815 14:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.815 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.815 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.073 00:17:46.073 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.073 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.073 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.332 14:45:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.332 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.332 14:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.332 14:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.332 14:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.332 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.332 { 00:17:46.332 "cntlid": 99, 00:17:46.332 "qid": 0, 00:17:46.332 "state": "enabled", 00:17:46.332 "thread": "nvmf_tgt_poll_group_000", 00:17:46.332 "listen_address": { 00:17:46.332 "trtype": "TCP", 00:17:46.332 "adrfam": "IPv4", 00:17:46.332 "traddr": "10.0.0.2", 00:17:46.332 "trsvcid": "4420" 00:17:46.333 }, 00:17:46.333 "peer_address": { 00:17:46.333 "trtype": "TCP", 00:17:46.333 "adrfam": "IPv4", 00:17:46.333 "traddr": "10.0.0.1", 00:17:46.333 "trsvcid": "58192" 00:17:46.333 }, 00:17:46.333 "auth": { 00:17:46.333 "state": "completed", 00:17:46.333 "digest": "sha512", 00:17:46.333 "dhgroup": "null" 00:17:46.333 } 00:17:46.333 } 00:17:46.333 ]' 00:17:46.333 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.333 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.333 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.333 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:46.333 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.333 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.333 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.333 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.593 14:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:17:47.162 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.162 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.162 14:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.162 14:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.162 14:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.162 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.162 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.162 14:45:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.421 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:47.421 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.421 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.421 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:47.421 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:47.421 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.421 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.421 14:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.421 14:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.421 14:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.421 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.421 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.680 00:17:47.680 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.680 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.680 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.680 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.680 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.680 14:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.680 14:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.680 14:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.680 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.680 { 00:17:47.680 "cntlid": 101, 00:17:47.680 "qid": 0, 00:17:47.680 "state": "enabled", 00:17:47.680 "thread": "nvmf_tgt_poll_group_000", 00:17:47.680 "listen_address": { 00:17:47.680 "trtype": "TCP", 00:17:47.680 "adrfam": "IPv4", 00:17:47.680 "traddr": "10.0.0.2", 00:17:47.680 "trsvcid": "4420" 00:17:47.680 }, 00:17:47.680 "peer_address": { 00:17:47.680 "trtype": "TCP", 00:17:47.680 "adrfam": "IPv4", 00:17:47.680 "traddr": "10.0.0.1", 00:17:47.680 "trsvcid": "35234" 00:17:47.680 }, 00:17:47.680 "auth": 
{ 00:17:47.680 "state": "completed", 00:17:47.680 "digest": "sha512", 00:17:47.680 "dhgroup": "null" 00:17:47.680 } 00:17:47.680 } 00:17:47.680 ]' 00:17:47.680 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.940 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.940 14:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.940 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:47.940 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.940 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.940 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.940 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.940 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:17:48.508 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.508 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.508 14:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.508 14:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.508 14:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.508 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.508 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.508 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.768 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:48.769 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.769 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.769 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:48.769 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:48.769 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.769 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:48.769 14:45:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.769 14:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.769 14:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.769 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.769 14:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.028 00:17:49.028 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.028 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.028 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.288 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.288 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.288 14:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.288 14:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.288 14:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.288 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.288 { 00:17:49.288 "cntlid": 103, 00:17:49.288 "qid": 0, 00:17:49.288 "state": "enabled", 00:17:49.288 "thread": "nvmf_tgt_poll_group_000", 00:17:49.288 "listen_address": { 00:17:49.288 "trtype": "TCP", 00:17:49.288 "adrfam": "IPv4", 00:17:49.288 "traddr": "10.0.0.2", 00:17:49.288 "trsvcid": "4420" 00:17:49.288 }, 00:17:49.288 "peer_address": { 00:17:49.288 "trtype": "TCP", 00:17:49.288 "adrfam": "IPv4", 00:17:49.288 "traddr": "10.0.0.1", 00:17:49.288 "trsvcid": "35264" 00:17:49.288 }, 00:17:49.288 "auth": { 00:17:49.288 "state": "completed", 00:17:49.288 "digest": "sha512", 00:17:49.288 "dhgroup": "null" 00:17:49.288 } 00:17:49.288 } 00:17:49.288 ]' 00:17:49.288 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.288 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.288 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.288 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:49.288 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.288 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.288 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.288 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.548 14:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:17:50.117 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.117 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.117 14:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.117 14:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.117 14:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.117 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.117 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.117 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.117 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.378 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:50.378 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.378 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.378 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:50.378 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:50.378 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.378 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.378 14:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.378 14:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.378 14:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.378 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.378 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.378 00:17:50.638 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.638 14:45:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.638 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.638 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.638 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.638 14:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.638 14:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.639 14:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.639 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.639 { 00:17:50.639 "cntlid": 105, 00:17:50.639 "qid": 0, 00:17:50.639 "state": "enabled", 00:17:50.639 "thread": "nvmf_tgt_poll_group_000", 00:17:50.639 "listen_address": { 00:17:50.639 "trtype": "TCP", 00:17:50.639 "adrfam": "IPv4", 00:17:50.639 "traddr": "10.0.0.2", 00:17:50.639 "trsvcid": "4420" 00:17:50.639 }, 00:17:50.639 "peer_address": { 00:17:50.639 "trtype": "TCP", 00:17:50.639 "adrfam": "IPv4", 00:17:50.639 "traddr": "10.0.0.1", 00:17:50.639 "trsvcid": "35298" 00:17:50.639 }, 00:17:50.639 "auth": { 00:17:50.639 "state": "completed", 00:17:50.639 "digest": "sha512", 00:17:50.639 "dhgroup": "ffdhe2048" 00:17:50.639 } 00:17:50.639 } 00:17:50.639 ]' 00:17:50.639 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.639 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.639 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.898 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.898 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.898 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.898 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.898 14:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.898 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:17:51.465 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.465 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.465 14:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.465 14:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
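(Annotation) After each RPC-driven attach/detach, the script re-validates the same key through the kernel initiator before dropping the host from the subsystem. The sketch below restates that leg for key0 of the sha512/ffdhe2048 pass; the nvme-cli flags, identifiers and DHHC-1 secrets are copied verbatim from the surrounding log entries, not newly generated.

  # Kernel-initiator leg: connect with the host and controller secrets for key0, then clean up.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
      --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: \
      --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=:
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # Remove the host entry on the target so the next key/group combination starts clean.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562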
00:17:51.465 14:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.465 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.465 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.465 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.724 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:51.724 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.724 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.724 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:51.724 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:51.724 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.724 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.725 14:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.725 14:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.725 14:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.725 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.725 14:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.984 00:17:51.984 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.984 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.984 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.244 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.244 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.244 14:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.244 14:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.244 14:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.244 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.244 { 00:17:52.244 "cntlid": 107, 00:17:52.244 "qid": 0, 00:17:52.244 "state": "enabled", 00:17:52.244 "thread": 
"nvmf_tgt_poll_group_000", 00:17:52.244 "listen_address": { 00:17:52.244 "trtype": "TCP", 00:17:52.244 "adrfam": "IPv4", 00:17:52.244 "traddr": "10.0.0.2", 00:17:52.244 "trsvcid": "4420" 00:17:52.244 }, 00:17:52.244 "peer_address": { 00:17:52.244 "trtype": "TCP", 00:17:52.244 "adrfam": "IPv4", 00:17:52.244 "traddr": "10.0.0.1", 00:17:52.244 "trsvcid": "35322" 00:17:52.244 }, 00:17:52.244 "auth": { 00:17:52.244 "state": "completed", 00:17:52.244 "digest": "sha512", 00:17:52.244 "dhgroup": "ffdhe2048" 00:17:52.244 } 00:17:52.244 } 00:17:52.244 ]' 00:17:52.244 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.244 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.244 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.244 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.244 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.244 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.244 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.244 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.504 14:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:17:53.073 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.073 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.073 14:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.073 14:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.073 14:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.073 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.073 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.073 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.332 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:53.332 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.332 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:53.332 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:53.332 14:45:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:53.332 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.332 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.332 14:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.332 14:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.332 14:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.332 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.332 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.332 00:17:53.332 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.332 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.332 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.592 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.592 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.592 14:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.592 14:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.592 14:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.592 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.592 { 00:17:53.592 "cntlid": 109, 00:17:53.592 "qid": 0, 00:17:53.592 "state": "enabled", 00:17:53.592 "thread": "nvmf_tgt_poll_group_000", 00:17:53.592 "listen_address": { 00:17:53.592 "trtype": "TCP", 00:17:53.592 "adrfam": "IPv4", 00:17:53.592 "traddr": "10.0.0.2", 00:17:53.592 "trsvcid": "4420" 00:17:53.592 }, 00:17:53.592 "peer_address": { 00:17:53.592 "trtype": "TCP", 00:17:53.592 "adrfam": "IPv4", 00:17:53.592 "traddr": "10.0.0.1", 00:17:53.592 "trsvcid": "35338" 00:17:53.592 }, 00:17:53.592 "auth": { 00:17:53.592 "state": "completed", 00:17:53.592 "digest": "sha512", 00:17:53.592 "dhgroup": "ffdhe2048" 00:17:53.592 } 00:17:53.592 } 00:17:53.592 ]' 00:17:53.592 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.592 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.592 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.851 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:53.851 14:45:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.851 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.851 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.851 14:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.851 14:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:17:54.421 14:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.421 14:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.421 14:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.421 14:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.421 14:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.421 14:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.421 14:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.421 14:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.680 14:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:54.680 14:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.680 14:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.680 14:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:54.680 14:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:54.680 14:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.680 14:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:54.680 14:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.680 14:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.680 14:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.680 14:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.680 14:45:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.939 00:17:54.939 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.939 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.939 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.199 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.199 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.199 14:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.199 14:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.199 14:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.199 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.199 { 00:17:55.199 "cntlid": 111, 00:17:55.199 "qid": 0, 00:17:55.199 "state": "enabled", 00:17:55.199 "thread": "nvmf_tgt_poll_group_000", 00:17:55.199 "listen_address": { 00:17:55.199 "trtype": "TCP", 00:17:55.199 "adrfam": "IPv4", 00:17:55.199 "traddr": "10.0.0.2", 00:17:55.199 "trsvcid": "4420" 00:17:55.199 }, 00:17:55.199 "peer_address": { 00:17:55.199 "trtype": "TCP", 00:17:55.199 "adrfam": "IPv4", 00:17:55.199 "traddr": "10.0.0.1", 00:17:55.199 "trsvcid": "35368" 00:17:55.199 }, 00:17:55.199 "auth": { 00:17:55.199 "state": "completed", 00:17:55.199 "digest": "sha512", 00:17:55.199 "dhgroup": "ffdhe2048" 00:17:55.199 } 00:17:55.199 } 00:17:55.199 ]' 00:17:55.199 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.199 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.199 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.199 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.199 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.199 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.199 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.199 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.458 14:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:17:56.028 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.028 14:45:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.028 14:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.028 14:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.028 14:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.028 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.028 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.028 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.028 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.288 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:56.288 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.288 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.288 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:56.288 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:56.288 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.288 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.288 14:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.288 14:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.288 14:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.288 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.288 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.288 00:17:56.548 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.548 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.548 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.548 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.548 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.548 14:45:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.548 14:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.548 14:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.548 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.548 { 00:17:56.548 "cntlid": 113, 00:17:56.548 "qid": 0, 00:17:56.548 "state": "enabled", 00:17:56.548 "thread": "nvmf_tgt_poll_group_000", 00:17:56.548 "listen_address": { 00:17:56.548 "trtype": "TCP", 00:17:56.548 "adrfam": "IPv4", 00:17:56.548 "traddr": "10.0.0.2", 00:17:56.548 "trsvcid": "4420" 00:17:56.548 }, 00:17:56.548 "peer_address": { 00:17:56.548 "trtype": "TCP", 00:17:56.548 "adrfam": "IPv4", 00:17:56.548 "traddr": "10.0.0.1", 00:17:56.548 "trsvcid": "33284" 00:17:56.548 }, 00:17:56.548 "auth": { 00:17:56.548 "state": "completed", 00:17:56.548 "digest": "sha512", 00:17:56.548 "dhgroup": "ffdhe3072" 00:17:56.548 } 00:17:56.548 } 00:17:56.548 ]' 00:17:56.548 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.548 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.548 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.807 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.807 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.807 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.807 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.807 14:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.808 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:17:57.377 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.377 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.377 14:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.377 14:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.377 14:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.377 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.377 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.377 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.637 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:57.637 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.637 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.637 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:57.637 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.637 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.637 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.637 14:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.637 14:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.637 14:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.637 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.637 14:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.930 00:17:57.930 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.930 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.930 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.190 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.190 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.190 14:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.190 14:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.190 14:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.190 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.190 { 00:17:58.190 "cntlid": 115, 00:17:58.190 "qid": 0, 00:17:58.190 "state": "enabled", 00:17:58.190 "thread": "nvmf_tgt_poll_group_000", 00:17:58.190 "listen_address": { 00:17:58.190 "trtype": "TCP", 00:17:58.190 "adrfam": "IPv4", 00:17:58.190 "traddr": "10.0.0.2", 00:17:58.190 "trsvcid": "4420" 00:17:58.190 }, 00:17:58.190 "peer_address": { 00:17:58.190 "trtype": "TCP", 00:17:58.190 "adrfam": "IPv4", 00:17:58.190 "traddr": "10.0.0.1", 00:17:58.190 "trsvcid": "33308" 00:17:58.190 }, 00:17:58.190 "auth": { 00:17:58.190 "state": "completed", 00:17:58.190 "digest": "sha512", 00:17:58.190 "dhgroup": "ffdhe3072" 00:17:58.190 } 00:17:58.190 } 
00:17:58.190 ]' 00:17:58.190 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.190 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.190 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.190 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.190 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.191 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.191 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.191 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.450 14:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.021 14:45:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.021 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.281 00:17:59.281 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.281 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.281 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.541 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.541 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.541 14:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.541 14:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.541 14:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.541 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.541 { 00:17:59.541 "cntlid": 117, 00:17:59.541 "qid": 0, 00:17:59.541 "state": "enabled", 00:17:59.541 "thread": "nvmf_tgt_poll_group_000", 00:17:59.541 "listen_address": { 00:17:59.541 "trtype": "TCP", 00:17:59.541 "adrfam": "IPv4", 00:17:59.541 "traddr": "10.0.0.2", 00:17:59.541 "trsvcid": "4420" 00:17:59.541 }, 00:17:59.541 "peer_address": { 00:17:59.541 "trtype": "TCP", 00:17:59.541 "adrfam": "IPv4", 00:17:59.541 "traddr": "10.0.0.1", 00:17:59.541 "trsvcid": "33318" 00:17:59.541 }, 00:17:59.541 "auth": { 00:17:59.541 "state": "completed", 00:17:59.541 "digest": "sha512", 00:17:59.541 "dhgroup": "ffdhe3072" 00:17:59.541 } 00:17:59.541 } 00:17:59.541 ]' 00:17:59.541 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.541 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.541 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.541 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.541 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.801 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.801 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.801 14:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.801 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:18:00.370 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.370 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.370 14:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.370 14:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.370 14:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.370 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.370 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.371 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.631 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:00.631 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.631 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.631 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:00.631 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:00.631 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.631 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:00.631 14:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.631 14:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.631 14:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.631 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.631 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.891 00:18:00.891 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.891 14:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.891 14:45:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.891 14:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.891 14:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.891 14:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.891 14:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.150 14:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.150 14:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.150 { 00:18:01.150 "cntlid": 119, 00:18:01.150 "qid": 0, 00:18:01.150 "state": "enabled", 00:18:01.150 "thread": "nvmf_tgt_poll_group_000", 00:18:01.150 "listen_address": { 00:18:01.150 "trtype": "TCP", 00:18:01.150 "adrfam": "IPv4", 00:18:01.150 "traddr": "10.0.0.2", 00:18:01.150 "trsvcid": "4420" 00:18:01.150 }, 00:18:01.150 "peer_address": { 00:18:01.150 "trtype": "TCP", 00:18:01.150 "adrfam": "IPv4", 00:18:01.150 "traddr": "10.0.0.1", 00:18:01.150 "trsvcid": "33340" 00:18:01.150 }, 00:18:01.150 "auth": { 00:18:01.150 "state": "completed", 00:18:01.150 "digest": "sha512", 00:18:01.150 "dhgroup": "ffdhe3072" 00:18:01.150 } 00:18:01.150 } 00:18:01.150 ]' 00:18:01.150 14:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.150 14:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.150 14:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.150 14:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.150 14:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.150 14:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.150 14:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.150 14:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.410 14:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.980 14:45:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.980 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.239 00:18:02.239 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.239 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.239 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.499 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.499 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.499 14:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.499 14:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.499 14:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.499 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.499 { 00:18:02.499 "cntlid": 121, 00:18:02.499 "qid": 0, 00:18:02.499 "state": "enabled", 00:18:02.499 "thread": "nvmf_tgt_poll_group_000", 00:18:02.499 "listen_address": { 00:18:02.499 "trtype": "TCP", 00:18:02.499 "adrfam": "IPv4", 
00:18:02.499 "traddr": "10.0.0.2", 00:18:02.499 "trsvcid": "4420" 00:18:02.499 }, 00:18:02.499 "peer_address": { 00:18:02.499 "trtype": "TCP", 00:18:02.499 "adrfam": "IPv4", 00:18:02.499 "traddr": "10.0.0.1", 00:18:02.499 "trsvcid": "33366" 00:18:02.499 }, 00:18:02.499 "auth": { 00:18:02.499 "state": "completed", 00:18:02.499 "digest": "sha512", 00:18:02.499 "dhgroup": "ffdhe4096" 00:18:02.499 } 00:18:02.499 } 00:18:02.499 ]' 00:18:02.499 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.499 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.499 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.499 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.499 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.499 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.499 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.499 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.757 14:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:18:03.325 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.325 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:03.325 14:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.325 14:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.325 14:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.325 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.325 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.325 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.584 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:03.584 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.584 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.584 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:03.584 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:03.584 14:45:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.584 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.584 14:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.584 14:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.584 14:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.584 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.584 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.844 00:18:03.844 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.844 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.844 14:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.104 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.104 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.105 14:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.105 14:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.105 14:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.105 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.105 { 00:18:04.105 "cntlid": 123, 00:18:04.105 "qid": 0, 00:18:04.105 "state": "enabled", 00:18:04.105 "thread": "nvmf_tgt_poll_group_000", 00:18:04.105 "listen_address": { 00:18:04.105 "trtype": "TCP", 00:18:04.105 "adrfam": "IPv4", 00:18:04.105 "traddr": "10.0.0.2", 00:18:04.105 "trsvcid": "4420" 00:18:04.105 }, 00:18:04.105 "peer_address": { 00:18:04.105 "trtype": "TCP", 00:18:04.105 "adrfam": "IPv4", 00:18:04.105 "traddr": "10.0.0.1", 00:18:04.105 "trsvcid": "33394" 00:18:04.105 }, 00:18:04.105 "auth": { 00:18:04.105 "state": "completed", 00:18:04.105 "digest": "sha512", 00:18:04.105 "dhgroup": "ffdhe4096" 00:18:04.105 } 00:18:04.105 } 00:18:04.105 ]' 00:18:04.105 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.105 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.105 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.105 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:04.105 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.105 14:45:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.105 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.105 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.365 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:18:04.936 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.936 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.936 14:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.936 14:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.936 14:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.936 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.936 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.936 14:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.936 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:04.936 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.936 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:04.936 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:04.936 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.936 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.936 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.936 14:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.936 14:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.936 14:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.936 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.936 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.195 00:18:05.195 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.195 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.195 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.455 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.455 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.455 14:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.455 14:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.455 14:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.455 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.455 { 00:18:05.455 "cntlid": 125, 00:18:05.455 "qid": 0, 00:18:05.455 "state": "enabled", 00:18:05.455 "thread": "nvmf_tgt_poll_group_000", 00:18:05.455 "listen_address": { 00:18:05.455 "trtype": "TCP", 00:18:05.455 "adrfam": "IPv4", 00:18:05.455 "traddr": "10.0.0.2", 00:18:05.455 "trsvcid": "4420" 00:18:05.455 }, 00:18:05.455 "peer_address": { 00:18:05.455 "trtype": "TCP", 00:18:05.455 "adrfam": "IPv4", 00:18:05.455 "traddr": "10.0.0.1", 00:18:05.455 "trsvcid": "33438" 00:18:05.455 }, 00:18:05.455 "auth": { 00:18:05.455 "state": "completed", 00:18:05.455 "digest": "sha512", 00:18:05.455 "dhgroup": "ffdhe4096" 00:18:05.455 } 00:18:05.455 } 00:18:05.455 ]' 00:18:05.455 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.455 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.455 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.456 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.456 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.715 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.715 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.715 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.715 14:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:18:06.285 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:06.285 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.285 14:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.285 14:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.285 14:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.285 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.285 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:06.285 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:06.546 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:06.546 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.546 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:06.546 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:06.546 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.546 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.546 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:06.546 14:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.546 14:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.546 14:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.546 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.546 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.806 00:18:06.806 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.806 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.806 14:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.066 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.066 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.066 14:45:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.066 14:45:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:07.066 14:45:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.066 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.066 { 00:18:07.066 "cntlid": 127, 00:18:07.066 "qid": 0, 00:18:07.066 "state": "enabled", 00:18:07.066 "thread": "nvmf_tgt_poll_group_000", 00:18:07.066 "listen_address": { 00:18:07.066 "trtype": "TCP", 00:18:07.066 "adrfam": "IPv4", 00:18:07.066 "traddr": "10.0.0.2", 00:18:07.066 "trsvcid": "4420" 00:18:07.066 }, 00:18:07.066 "peer_address": { 00:18:07.066 "trtype": "TCP", 00:18:07.066 "adrfam": "IPv4", 00:18:07.066 "traddr": "10.0.0.1", 00:18:07.066 "trsvcid": "36564" 00:18:07.066 }, 00:18:07.066 "auth": { 00:18:07.066 "state": "completed", 00:18:07.066 "digest": "sha512", 00:18:07.066 "dhgroup": "ffdhe4096" 00:18:07.066 } 00:18:07.066 } 00:18:07.066 ]' 00:18:07.066 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.066 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.066 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.066 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.066 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.066 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.066 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.066 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.326 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:18:07.893 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.893 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.893 14:45:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.893 14:45:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.893 14:45:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.893 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.893 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.893 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.893 14:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.893 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:18:07.893 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.893 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.893 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:07.893 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:07.893 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.893 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.893 14:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.893 14:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.893 14:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.893 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.893 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.462 00:18:08.462 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.462 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.462 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.462 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.462 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.462 14:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.462 14:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.462 14:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.462 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.462 { 00:18:08.462 "cntlid": 129, 00:18:08.462 "qid": 0, 00:18:08.462 "state": "enabled", 00:18:08.462 "thread": "nvmf_tgt_poll_group_000", 00:18:08.462 "listen_address": { 00:18:08.462 "trtype": "TCP", 00:18:08.462 "adrfam": "IPv4", 00:18:08.462 "traddr": "10.0.0.2", 00:18:08.462 "trsvcid": "4420" 00:18:08.462 }, 00:18:08.462 "peer_address": { 00:18:08.462 "trtype": "TCP", 00:18:08.462 "adrfam": "IPv4", 00:18:08.462 "traddr": "10.0.0.1", 00:18:08.462 "trsvcid": "36610" 00:18:08.462 }, 00:18:08.462 "auth": { 00:18:08.462 "state": "completed", 00:18:08.462 "digest": "sha512", 00:18:08.462 "dhgroup": "ffdhe6144" 00:18:08.462 } 00:18:08.462 } 00:18:08.462 ]' 00:18:08.462 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.462 14:45:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.462 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.462 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.462 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.721 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.721 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.721 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.721 14:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:18:09.290 14:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.290 14:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:09.290 14:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.290 14:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.290 14:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.290 14:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.290 14:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.290 14:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.549 14:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:09.549 14:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.549 14:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.549 14:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:09.549 14:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:09.549 14:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.549 14:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.549 14:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.549 14:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.549 14:45:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.549 14:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.549 14:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.808 00:18:09.808 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.808 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.808 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.067 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.067 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.067 14:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.067 14:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.067 14:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.067 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.067 { 00:18:10.067 "cntlid": 131, 00:18:10.067 "qid": 0, 00:18:10.067 "state": "enabled", 00:18:10.067 "thread": "nvmf_tgt_poll_group_000", 00:18:10.067 "listen_address": { 00:18:10.067 "trtype": "TCP", 00:18:10.067 "adrfam": "IPv4", 00:18:10.067 "traddr": "10.0.0.2", 00:18:10.067 "trsvcid": "4420" 00:18:10.067 }, 00:18:10.067 "peer_address": { 00:18:10.067 "trtype": "TCP", 00:18:10.067 "adrfam": "IPv4", 00:18:10.067 "traddr": "10.0.0.1", 00:18:10.067 "trsvcid": "36638" 00:18:10.067 }, 00:18:10.067 "auth": { 00:18:10.067 "state": "completed", 00:18:10.067 "digest": "sha512", 00:18:10.067 "dhgroup": "ffdhe6144" 00:18:10.067 } 00:18:10.067 } 00:18:10.067 ]' 00:18:10.067 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.067 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.067 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.067 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.067 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.067 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.067 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.067 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.326 14:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:18:10.895 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.895 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.895 14:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.895 14:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.895 14:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.895 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.895 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:10.895 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.155 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:11.155 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.155 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:11.155 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:11.155 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.155 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.155 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.155 14:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.155 14:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.155 14:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.155 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.155 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.415 00:18:11.415 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.415 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.415 14:45:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.674 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.674 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.674 14:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.674 14:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.674 14:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.674 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.674 { 00:18:11.674 "cntlid": 133, 00:18:11.674 "qid": 0, 00:18:11.674 "state": "enabled", 00:18:11.674 "thread": "nvmf_tgt_poll_group_000", 00:18:11.674 "listen_address": { 00:18:11.674 "trtype": "TCP", 00:18:11.674 "adrfam": "IPv4", 00:18:11.674 "traddr": "10.0.0.2", 00:18:11.674 "trsvcid": "4420" 00:18:11.674 }, 00:18:11.674 "peer_address": { 00:18:11.674 "trtype": "TCP", 00:18:11.674 "adrfam": "IPv4", 00:18:11.674 "traddr": "10.0.0.1", 00:18:11.674 "trsvcid": "36670" 00:18:11.674 }, 00:18:11.674 "auth": { 00:18:11.674 "state": "completed", 00:18:11.674 "digest": "sha512", 00:18:11.674 "dhgroup": "ffdhe6144" 00:18:11.674 } 00:18:11.674 } 00:18:11.674 ]' 00:18:11.674 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.674 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.674 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.674 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.674 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.674 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.674 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.674 14:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.934 14:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:18:12.532 14:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.532 14:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.532 14:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.532 14:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.532 14:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.532 14:45:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.532 14:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.532 14:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.533 14:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:12.533 14:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.533 14:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.533 14:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:12.533 14:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:12.533 14:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.533 14:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:12.533 14:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.533 14:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.533 14:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.533 14:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.533 14:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.102 00:18:13.102 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.102 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.102 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.102 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.102 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.102 14:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.102 14:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.102 14:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.102 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.102 { 00:18:13.102 "cntlid": 135, 00:18:13.102 "qid": 0, 00:18:13.102 "state": "enabled", 00:18:13.102 "thread": "nvmf_tgt_poll_group_000", 00:18:13.102 "listen_address": { 00:18:13.102 "trtype": "TCP", 00:18:13.102 "adrfam": "IPv4", 00:18:13.102 "traddr": "10.0.0.2", 00:18:13.102 "trsvcid": "4420" 00:18:13.102 }, 
00:18:13.102 "peer_address": { 00:18:13.102 "trtype": "TCP", 00:18:13.102 "adrfam": "IPv4", 00:18:13.102 "traddr": "10.0.0.1", 00:18:13.102 "trsvcid": "36686" 00:18:13.102 }, 00:18:13.102 "auth": { 00:18:13.102 "state": "completed", 00:18:13.102 "digest": "sha512", 00:18:13.102 "dhgroup": "ffdhe6144" 00:18:13.102 } 00:18:13.102 } 00:18:13.102 ]' 00:18:13.102 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.102 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.102 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.362 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.362 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.362 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.362 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.362 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.362 14:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:18:13.931 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.931 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:13.931 14:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.931 14:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.931 14:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.931 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.931 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.931 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.931 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.191 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:14.191 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.191 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:14.191 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:14.191 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:14.191 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:14.191 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.191 14:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.191 14:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.191 14:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.191 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.191 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.760 00:18:14.760 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.760 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.760 14:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.760 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.760 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.760 14:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.760 14:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.760 14:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.760 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.760 { 00:18:14.760 "cntlid": 137, 00:18:14.761 "qid": 0, 00:18:14.761 "state": "enabled", 00:18:14.761 "thread": "nvmf_tgt_poll_group_000", 00:18:14.761 "listen_address": { 00:18:14.761 "trtype": "TCP", 00:18:14.761 "adrfam": "IPv4", 00:18:14.761 "traddr": "10.0.0.2", 00:18:14.761 "trsvcid": "4420" 00:18:14.761 }, 00:18:14.761 "peer_address": { 00:18:14.761 "trtype": "TCP", 00:18:14.761 "adrfam": "IPv4", 00:18:14.761 "traddr": "10.0.0.1", 00:18:14.761 "trsvcid": "36720" 00:18:14.761 }, 00:18:14.761 "auth": { 00:18:14.761 "state": "completed", 00:18:14.761 "digest": "sha512", 00:18:14.761 "dhgroup": "ffdhe8192" 00:18:14.761 } 00:18:14.761 } 00:18:14.761 ]' 00:18:14.761 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.020 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.020 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.020 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.020 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.020 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.020 14:45:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.020 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.279 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:18:15.849 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.849 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.849 14:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.849 14:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.849 14:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.849 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.849 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.849 14:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.849 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:15.849 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.849 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.849 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:15.849 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:15.849 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.849 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.849 14:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.849 14:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.849 14:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.849 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.849 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.419 00:18:16.419 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.419 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.419 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.679 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.679 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.679 14:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.679 14:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.679 14:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.679 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.679 { 00:18:16.679 "cntlid": 139, 00:18:16.679 "qid": 0, 00:18:16.679 "state": "enabled", 00:18:16.679 "thread": "nvmf_tgt_poll_group_000", 00:18:16.679 "listen_address": { 00:18:16.679 "trtype": "TCP", 00:18:16.679 "adrfam": "IPv4", 00:18:16.679 "traddr": "10.0.0.2", 00:18:16.679 "trsvcid": "4420" 00:18:16.679 }, 00:18:16.679 "peer_address": { 00:18:16.679 "trtype": "TCP", 00:18:16.679 "adrfam": "IPv4", 00:18:16.679 "traddr": "10.0.0.1", 00:18:16.679 "trsvcid": "36758" 00:18:16.679 }, 00:18:16.679 "auth": { 00:18:16.679 "state": "completed", 00:18:16.679 "digest": "sha512", 00:18:16.679 "dhgroup": "ffdhe8192" 00:18:16.679 } 00:18:16.679 } 00:18:16.679 ]' 00:18:16.679 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.679 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.679 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.679 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.679 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.679 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.679 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.679 14:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.938 14:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWNkNmJhNWJlYTA3NzJiZjFkYzMxOGE3ZWQwOTE1NzUSlKdc: --dhchap-ctrl-secret DHHC-1:02:YjQ0MjgwM2I4MzJiZWMzNDY5MzkzOGJjODIyMWZmYWNjY2M2ZWI4MDc0ZDAwMDBm7q6Z2A==: 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.507 14:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.508 14:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.508 14:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.508 14:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.078 00:18:18.078 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.078 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.078 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.338 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.338 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.338 14:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.338 14:45:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.338 14:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.338 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.338 { 00:18:18.338 "cntlid": 141, 00:18:18.338 "qid": 0, 00:18:18.338 "state": "enabled", 00:18:18.338 "thread": "nvmf_tgt_poll_group_000", 00:18:18.338 "listen_address": { 00:18:18.338 "trtype": "TCP", 00:18:18.338 "adrfam": "IPv4", 00:18:18.338 "traddr": "10.0.0.2", 00:18:18.338 "trsvcid": "4420" 00:18:18.338 }, 00:18:18.338 "peer_address": { 00:18:18.338 "trtype": "TCP", 00:18:18.338 "adrfam": "IPv4", 00:18:18.338 "traddr": "10.0.0.1", 00:18:18.338 "trsvcid": "49302" 00:18:18.338 }, 00:18:18.338 "auth": { 00:18:18.338 "state": "completed", 00:18:18.338 "digest": "sha512", 00:18:18.338 "dhgroup": "ffdhe8192" 00:18:18.338 } 00:18:18.338 } 00:18:18.338 ]' 00:18:18.338 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.338 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.338 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.338 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.338 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.338 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.338 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.338 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.598 14:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTE3ZTJlNTdlNTY2MjNiNTFhY2E2MjhjZDlmYjFlYTQyMzNlZTk3YWI0ZWUxZDlhvnxhsw==: --dhchap-ctrl-secret DHHC-1:01:ZjEwZWMwZWRlZjIzOGY0NjcwZjBhNmE1ZjYzMWNlZmSP0ihy: 00:18:19.168 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.168 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:19.168 14:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.168 14:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.168 14:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.168 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.168 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.168 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.428 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:18:19.428 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.428 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.428 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:19.428 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.428 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.428 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:19.428 14:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.428 14:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.428 14:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.428 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.428 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.687 00:18:19.687 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.687 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.687 14:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.947 14:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.947 14:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.947 14:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.947 14:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.947 14:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.947 14:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.947 { 00:18:19.947 "cntlid": 143, 00:18:19.947 "qid": 0, 00:18:19.947 "state": "enabled", 00:18:19.947 "thread": "nvmf_tgt_poll_group_000", 00:18:19.947 "listen_address": { 00:18:19.947 "trtype": "TCP", 00:18:19.947 "adrfam": "IPv4", 00:18:19.947 "traddr": "10.0.0.2", 00:18:19.947 "trsvcid": "4420" 00:18:19.947 }, 00:18:19.947 "peer_address": { 00:18:19.947 "trtype": "TCP", 00:18:19.947 "adrfam": "IPv4", 00:18:19.947 "traddr": "10.0.0.1", 00:18:19.947 "trsvcid": "49320" 00:18:19.947 }, 00:18:19.947 "auth": { 00:18:19.947 "state": "completed", 00:18:19.947 "digest": "sha512", 00:18:19.947 "dhgroup": "ffdhe8192" 00:18:19.947 } 00:18:19.947 } 00:18:19.947 ]' 00:18:19.947 14:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.947 14:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.947 
14:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.206 14:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.206 14:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.206 14:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.206 14:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.206 14:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.206 14:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:18:20.777 14:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.777 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:20.777 14:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.777 14:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.777 14:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.777 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:20.777 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:20.777 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:20.777 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.777 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.777 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:21.037 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:21.037 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.037 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.037 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:21.037 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.037 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.037 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:21.037 14:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.037 14:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.037 14:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.037 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.037 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.607 00:18:21.607 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.607 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.607 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.607 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.607 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.607 14:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.607 14:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.607 14:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.607 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.607 { 00:18:21.607 "cntlid": 145, 00:18:21.607 "qid": 0, 00:18:21.607 "state": "enabled", 00:18:21.607 "thread": "nvmf_tgt_poll_group_000", 00:18:21.607 "listen_address": { 00:18:21.607 "trtype": "TCP", 00:18:21.607 "adrfam": "IPv4", 00:18:21.607 "traddr": "10.0.0.2", 00:18:21.607 "trsvcid": "4420" 00:18:21.607 }, 00:18:21.607 "peer_address": { 00:18:21.607 "trtype": "TCP", 00:18:21.607 "adrfam": "IPv4", 00:18:21.607 "traddr": "10.0.0.1", 00:18:21.607 "trsvcid": "49342" 00:18:21.607 }, 00:18:21.607 "auth": { 00:18:21.607 "state": "completed", 00:18:21.607 "digest": "sha512", 00:18:21.607 "dhgroup": "ffdhe8192" 00:18:21.607 } 00:18:21.607 } 00:18:21.607 ]' 00:18:21.607 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.867 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.867 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.867 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.867 14:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.867 14:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.867 14:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.867 14:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.127 14:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzE2MmUwMjAzYTRhMDU0Y2FlY2JlZjc5N2FhNzNiY2I1MWZiZmE2M2U0NDQ5MWYyOWoK/w==: --dhchap-ctrl-secret DHHC-1:03:ODY1MzNmMjA1YmNiYjdkMDQ4Y2ViNzU5YmUyYTdhMDRlNWNkMWU2MDBlODNjMDlkMjZmNWU5MzJiMzc2MmEwMcCQ65w=: 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:22.697 14:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:18:22.957 request: 00:18:22.957 { 00:18:22.957 "name": "nvme0", 00:18:22.957 "trtype": "tcp", 00:18:22.957 "traddr": "10.0.0.2", 00:18:22.957 "adrfam": "ipv4", 00:18:22.957 "trsvcid": "4420", 00:18:22.957 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:22.957 "prchk_reftag": false, 00:18:22.957 "prchk_guard": false, 00:18:22.957 "hdgst": false, 00:18:22.957 "ddgst": false, 00:18:22.957 "dhchap_key": "key2", 00:18:22.957 "method": "bdev_nvme_attach_controller", 00:18:22.957 "req_id": 1 00:18:22.957 } 00:18:22.957 Got JSON-RPC error response 00:18:22.957 response: 00:18:22.957 { 00:18:22.957 "code": -5, 00:18:22.957 "message": "Input/output error" 00:18:22.957 } 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.957 14:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:23.527 request: 00:18:23.527 { 00:18:23.527 "name": "nvme0", 00:18:23.527 "trtype": "tcp", 00:18:23.527 "traddr": "10.0.0.2", 00:18:23.527 "adrfam": "ipv4", 00:18:23.527 "trsvcid": "4420", 00:18:23.527 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:23.527 "prchk_reftag": false, 00:18:23.527 "prchk_guard": false, 00:18:23.527 "hdgst": false, 00:18:23.527 "ddgst": false, 00:18:23.527 "dhchap_key": "key1", 00:18:23.527 "dhchap_ctrlr_key": "ckey2", 00:18:23.527 "method": "bdev_nvme_attach_controller", 00:18:23.527 "req_id": 1 00:18:23.527 } 00:18:23.527 Got JSON-RPC error response 00:18:23.527 response: 00:18:23.527 { 00:18:23.527 "code": -5, 00:18:23.527 "message": "Input/output error" 00:18:23.527 } 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.527 14:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.787 request: 00:18:23.787 { 00:18:23.787 "name": "nvme0", 00:18:23.787 "trtype": "tcp", 00:18:23.787 "traddr": "10.0.0.2", 00:18:23.787 "adrfam": "ipv4", 00:18:23.787 "trsvcid": "4420", 00:18:23.787 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:23.787 "prchk_reftag": false, 00:18:23.787 "prchk_guard": false, 00:18:23.787 "hdgst": false, 00:18:23.787 "ddgst": false, 00:18:23.787 "dhchap_key": "key1", 00:18:23.787 "dhchap_ctrlr_key": "ckey1", 00:18:23.787 "method": "bdev_nvme_attach_controller", 00:18:23.787 "req_id": 1 00:18:23.787 } 00:18:23.787 Got JSON-RPC error response 00:18:23.787 response: 00:18:23.787 { 00:18:23.787 "code": -5, 00:18:23.787 "message": "Input/output error" 00:18:23.787 } 00:18:23.787 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:23.787 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:23.787 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:23.787 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:23.787 14:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:23.787 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.787 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.047 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.047 14:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2323517 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2323517 ']' 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2323517 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2323517 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2323517' 00:18:24.048 killing process with pid 2323517 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2323517 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2323517 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2344172 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2344172 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2344172 ']' 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.048 14:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.989 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.989 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:24.989 14:45:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:24.989 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:24.989 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.989 14:45:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.989 14:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:24.989 14:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2344172 00:18:24.989 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2344172 ']' 00:18:24.989 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.989 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.989 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
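The sequence above kills the first nvmf target (pid 2323517) and relaunches a fresh one with DH-CHAP authentication tracing enabled, then waits for its RPC socket to come up. Condensed into a standalone sketch using only the flags and paths visible in the trace; the explicit framework_start_init call is an assumption about what the later bare rpc_cmd batch sends, since --wait-for-rpc defers subsystem initialization until that RPC arrives:

    # relaunch nvmf_tgt inside the test netns with nvmf_auth debug logging
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # waitforlisten polls until /var/tmp/spdk.sock answers; initialization then
    # finishes only once framework_start_init is issued
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init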
00:18:24.989 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.989 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.249 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.250 14:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.821 00:18:25.821 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.821 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.821 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.081 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.081 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.081 14:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.081 14:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.081 14:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.081 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.081 { 00:18:26.081 
"cntlid": 1, 00:18:26.081 "qid": 0, 00:18:26.081 "state": "enabled", 00:18:26.081 "thread": "nvmf_tgt_poll_group_000", 00:18:26.081 "listen_address": { 00:18:26.081 "trtype": "TCP", 00:18:26.081 "adrfam": "IPv4", 00:18:26.081 "traddr": "10.0.0.2", 00:18:26.081 "trsvcid": "4420" 00:18:26.081 }, 00:18:26.081 "peer_address": { 00:18:26.081 "trtype": "TCP", 00:18:26.081 "adrfam": "IPv4", 00:18:26.081 "traddr": "10.0.0.1", 00:18:26.081 "trsvcid": "49370" 00:18:26.081 }, 00:18:26.081 "auth": { 00:18:26.081 "state": "completed", 00:18:26.081 "digest": "sha512", 00:18:26.081 "dhgroup": "ffdhe8192" 00:18:26.081 } 00:18:26.081 } 00:18:26.081 ]' 00:18:26.081 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.081 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.081 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.081 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.081 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.081 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.081 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.081 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.342 14:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZmIyY2ZjMjM5YjVjNjM1NGUzN2JjOTdmYzIxMWMwZDY5ZDAwZTAxNWYwNDQ3YTJiMzZkYjI5N2MyMmFjOTMyNpH/kh8=: 00:18:26.947 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.947 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:26.947 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.947 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.947 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.947 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:26.947 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.947 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.947 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.947 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:26.947 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:27.238 14:45:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.238 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:27.238 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.238 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:27.238 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:27.238 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:27.238 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:27.238 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.238 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.238 request: 00:18:27.238 { 00:18:27.238 "name": "nvme0", 00:18:27.238 "trtype": "tcp", 00:18:27.238 "traddr": "10.0.0.2", 00:18:27.238 "adrfam": "ipv4", 00:18:27.238 "trsvcid": "4420", 00:18:27.238 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:27.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:27.238 "prchk_reftag": false, 00:18:27.238 "prchk_guard": false, 00:18:27.238 "hdgst": false, 00:18:27.238 "ddgst": false, 00:18:27.238 "dhchap_key": "key3", 00:18:27.238 "method": "bdev_nvme_attach_controller", 00:18:27.238 "req_id": 1 00:18:27.238 } 00:18:27.238 Got JSON-RPC error response 00:18:27.238 response: 00:18:27.238 { 00:18:27.238 "code": -5, 00:18:27.238 "message": "Input/output error" 00:18:27.238 } 00:18:27.238 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:27.238 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:27.239 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:27.239 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:27.239 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:27.239 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:27.239 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:27.239 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.499 request: 00:18:27.499 { 00:18:27.499 "name": "nvme0", 00:18:27.499 "trtype": "tcp", 00:18:27.499 "traddr": "10.0.0.2", 00:18:27.499 "adrfam": "ipv4", 00:18:27.499 "trsvcid": "4420", 00:18:27.499 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:27.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:27.499 "prchk_reftag": false, 00:18:27.499 "prchk_guard": false, 00:18:27.499 "hdgst": false, 00:18:27.499 "ddgst": false, 00:18:27.499 "dhchap_key": "key3", 00:18:27.499 "method": "bdev_nvme_attach_controller", 00:18:27.499 "req_id": 1 00:18:27.499 } 00:18:27.499 Got JSON-RPC error response 00:18:27.499 response: 00:18:27.499 { 00:18:27.499 "code": -5, 00:18:27.499 "message": "Input/output error" 00:18:27.499 } 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.499 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.759 14:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:28.019 request: 00:18:28.019 { 00:18:28.019 "name": "nvme0", 00:18:28.019 "trtype": "tcp", 00:18:28.019 "traddr": "10.0.0.2", 00:18:28.019 "adrfam": "ipv4", 00:18:28.019 "trsvcid": "4420", 00:18:28.019 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:28.019 "prchk_reftag": false, 00:18:28.019 "prchk_guard": false, 00:18:28.019 "hdgst": false, 00:18:28.019 "ddgst": false, 00:18:28.019 
"dhchap_key": "key0", 00:18:28.019 "dhchap_ctrlr_key": "key1", 00:18:28.019 "method": "bdev_nvme_attach_controller", 00:18:28.019 "req_id": 1 00:18:28.019 } 00:18:28.019 Got JSON-RPC error response 00:18:28.019 response: 00:18:28.019 { 00:18:28.019 "code": -5, 00:18:28.019 "message": "Input/output error" 00:18:28.019 } 00:18:28.020 14:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:28.020 14:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:28.020 14:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:28.020 14:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:28.020 14:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:28.020 14:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:28.279 00:18:28.280 14:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:28.280 14:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:28.280 14:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2323671 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2323671 ']' 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2323671 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2323671 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2323671' 00:18:28.540 killing process with pid 2323671 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2323671 00:18:28.540 14:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2323671 
00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:29.109 rmmod nvme_tcp 00:18:29.109 rmmod nvme_fabrics 00:18:29.109 rmmod nvme_keyring 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2344172 ']' 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2344172 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2344172 ']' 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2344172 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2344172 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2344172' 00:18:29.109 killing process with pid 2344172 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2344172 00:18:29.109 14:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2344172 00:18:29.368 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:29.368 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:29.368 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:29.368 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.368 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:29.368 14:45:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.368 14:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.368 14:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.278 14:45:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:31.278 14:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.viT /tmp/spdk.key-sha256.GJu /tmp/spdk.key-sha384.RFU /tmp/spdk.key-sha512.Wjf /tmp/spdk.key-sha512.uBD /tmp/spdk.key-sha384.0Yv /tmp/spdk.key-sha256.0wv '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:31.278 00:18:31.278 real 2m8.985s 00:18:31.278 user 4m56.139s 00:18:31.278 sys 0m18.606s 00:18:31.278 14:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:31.278 14:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.278 ************************************ 00:18:31.278 END TEST nvmf_auth_target 00:18:31.278 ************************************ 00:18:31.278 14:45:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:31.278 14:45:51 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:31.278 14:45:51 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:31.278 14:45:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:31.279 14:45:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.279 14:45:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:31.279 ************************************ 00:18:31.279 START TEST nvmf_bdevio_no_huge 00:18:31.279 ************************************ 00:18:31.279 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:31.539 * Looking for test storage... 00:18:31.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
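Here the auth test wraps up (about 2m9s wall time) and nvmf_bdevio_no_huge starts by sourcing test/nvmf/common.sh, which pins the TCP ports and derives the host identity reused by every later connect and attach. A minimal sketch of that variable block; the suffix strip used for NVME_HOSTID is an assumption, the trace only shows the resulting values:

    NVMF_PORT=4420
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # here: nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # bare UUID, passed back as --hostid on nvme connect
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'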
00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.539 14:45:51 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:31.539 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:31.540 14:45:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
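This gather_supported_nvmf_pci_devs scan fills per-family PCI ID lists (the e810 entries 0x1592 and 0x159b were just added, x722 and mlx follow) and, for this e810 run, keeps only the E810 list (pci_devs=("${e810[@]}")) for the interface discovery below. A rough standalone equivalent of that discovery, shown for illustration only (assumption: lspci and sysfs here, whereas the harness reads its prebuilt pci_bus_cache):

    # find Intel E810 functions (vendor 0x8086, device 0x159b) and list their net interfaces
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        echo "Found $pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done
    # on this node that yields 0000:86:00.0 -> cvl_0_0 and 0000:86:00.1 -> cvl_0_1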
00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:36.821 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:36.821 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:36.821 
14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:36.821 Found net devices under 0000:86:00.0: cvl_0_0 00:18:36.821 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:36.822 Found net devices under 0000:86:00.1: cvl_0_1 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.822 14:45:56 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:36.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:36.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:18:36.822 00:18:36.822 --- 10.0.0.2 ping statistics --- 00:18:36.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.822 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:36.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:36.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.382 ms 00:18:36.822 00:18:36.822 --- 10.0.0.1 ping statistics --- 00:18:36.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.822 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2348436 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 
2348436 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2348436 ']' 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:36.822 14:45:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:36.822 [2024-07-25 14:45:56.656741] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:18:36.822 [2024-07-25 14:45:56.656784] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:36.822 [2024-07-25 14:45:56.722286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:36.822 [2024-07-25 14:45:56.807211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.822 [2024-07-25 14:45:56.807244] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.822 [2024-07-25 14:45:56.807250] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.822 [2024-07-25 14:45:56.807258] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.822 [2024-07-25 14:45:56.807263] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
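The target for this test runs without hugepages: --no-huge together with -s 1024 gives DPDK a 1 GiB budget of ordinary anonymous memory, -m 0x78 pins it to cores 3-6 so it stays clear of the bdevio app started later on 0x7, and the process lives inside the test network namespace. The launch line, condensed from the trace above:

    # no-hugepage nvmf target: 1 GiB of plain memory, cores 3-6, all trace groups enabled
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78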
00:18:36.822 [2024-07-25 14:45:56.807384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:36.822 [2024-07-25 14:45:56.807494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:36.822 [2024-07-25 14:45:56.807600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:36.822 [2024-07-25 14:45:56.807600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.392 [2024-07-25 14:45:57.494979] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.392 Malloc0 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.392 [2024-07-25 14:45:57.531193] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:37.392 { 00:18:37.392 "params": { 00:18:37.392 "name": "Nvme$subsystem", 00:18:37.392 "trtype": "$TEST_TRANSPORT", 00:18:37.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:37.392 "adrfam": "ipv4", 00:18:37.392 "trsvcid": "$NVMF_PORT", 00:18:37.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:37.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:37.392 "hdgst": ${hdgst:-false}, 00:18:37.392 "ddgst": ${ddgst:-false} 00:18:37.392 }, 00:18:37.392 "method": "bdev_nvme_attach_controller" 00:18:37.392 } 00:18:37.392 EOF 00:18:37.392 )") 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:37.392 14:45:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:37.392 "params": { 00:18:37.392 "name": "Nvme1", 00:18:37.392 "trtype": "tcp", 00:18:37.392 "traddr": "10.0.0.2", 00:18:37.392 "adrfam": "ipv4", 00:18:37.392 "trsvcid": "4420", 00:18:37.392 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.392 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.392 "hdgst": false, 00:18:37.392 "ddgst": false 00:18:37.392 }, 00:18:37.392 "method": "bdev_nvme_attach_controller" 00:18:37.392 }' 00:18:37.392 [2024-07-25 14:45:57.580822] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
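gen_nvmf_target_json expands the small heredoc template above into the JSON config that bdevio reads on /dev/fd/62, so the initiator-side controller (Nvme1, exposing Nvme1n1) is created straight from the config file rather than via RPC. The shape of that invocation, condensed; treating /dev/fd/62 as the process substitution <(gen_nvmf_target_json) is an assumption, the trace only shows the resulting fd path:

    # run the bdevio app without hugepages, feeding it the generated attach config on an anonymous fd
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
        --json <(gen_nvmf_target_json) --no-huge -s 1024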
00:18:37.392 [2024-07-25 14:45:57.580875] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2348464 ] 00:18:37.392 [2024-07-25 14:45:57.641101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:37.651 [2024-07-25 14:45:57.728506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.651 [2024-07-25 14:45:57.728599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.651 [2024-07-25 14:45:57.728599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.911 I/O targets: 00:18:37.911 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:37.911 00:18:37.911 00:18:37.911 CUnit - A unit testing framework for C - Version 2.1-3 00:18:37.911 http://cunit.sourceforge.net/ 00:18:37.911 00:18:37.911 00:18:37.911 Suite: bdevio tests on: Nvme1n1 00:18:37.911 Test: blockdev write read block ...passed 00:18:37.911 Test: blockdev write zeroes read block ...passed 00:18:37.911 Test: blockdev write zeroes read no split ...passed 00:18:37.911 Test: blockdev write zeroes read split ...passed 00:18:38.170 Test: blockdev write zeroes read split partial ...passed 00:18:38.170 Test: blockdev reset ...[2024-07-25 14:45:58.238489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:38.170 [2024-07-25 14:45:58.238552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c18300 (9): Bad file descriptor 00:18:38.170 [2024-07-25 14:45:58.258261] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:38.170 passed 00:18:38.170 Test: blockdev write read 8 blocks ...passed 00:18:38.170 Test: blockdev write read size > 128k ...passed 00:18:38.170 Test: blockdev write read invalid size ...passed 00:18:38.170 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:38.170 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:38.170 Test: blockdev write read max offset ...passed 00:18:38.170 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:38.170 Test: blockdev writev readv 8 blocks ...passed 00:18:38.170 Test: blockdev writev readv 30 x 1block ...passed 00:18:38.430 Test: blockdev writev readv block ...passed 00:18:38.430 Test: blockdev writev readv size > 128k ...passed 00:18:38.430 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:38.430 Test: blockdev comparev and writev ...[2024-07-25 14:45:58.498398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.431 [2024-07-25 14:45:58.498425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.431 [2024-07-25 14:45:58.498439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.431 [2024-07-25 14:45:58.498448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.431 [2024-07-25 14:45:58.498920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.431 [2024-07-25 14:45:58.498934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:38.431 [2024-07-25 14:45:58.498946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.431 [2024-07-25 14:45:58.498953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:38.431 [2024-07-25 14:45:58.499410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.431 [2024-07-25 14:45:58.499420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:38.431 [2024-07-25 14:45:58.499432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.431 [2024-07-25 14:45:58.499439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:38.431 [2024-07-25 14:45:58.499888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.431 [2024-07-25 14:45:58.499899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:38.431 [2024-07-25 14:45:58.499910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.431 [2024-07-25 14:45:58.499917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:38.431 passed 00:18:38.431 Test: blockdev nvme passthru rw ...passed 00:18:38.431 Test: blockdev nvme passthru vendor specific ...[2024-07-25 14:45:58.583964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:38.431 [2024-07-25 14:45:58.583979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:38.431 [2024-07-25 14:45:58.584384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:38.431 [2024-07-25 14:45:58.584394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:38.431 [2024-07-25 14:45:58.584787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:38.431 [2024-07-25 14:45:58.584797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:38.431 [2024-07-25 14:45:58.585189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:38.431 [2024-07-25 14:45:58.585199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:38.431 passed 00:18:38.431 Test: blockdev nvme admin passthru ...passed 00:18:38.431 Test: blockdev copy ...passed 00:18:38.431 00:18:38.431 Run Summary: Type Total Ran Passed Failed Inactive 00:18:38.431 suites 1 1 n/a 0 0 00:18:38.431 tests 23 23 23 0 0 00:18:38.431 asserts 152 152 152 0 n/a 00:18:38.431 00:18:38.431 Elapsed time = 1.222 seconds 00:18:38.691 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:38.691 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.691 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.691 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.691 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:38.691 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:38.691 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:38.691 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:38.691 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:38.691 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:38.691 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:38.691 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:38.691 rmmod nvme_tcp 00:18:38.691 rmmod nvme_fabrics 00:18:38.691 rmmod nvme_keyring 00:18:38.691 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:38.952 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:38.952 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:38.952 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2348436 ']' 00:18:38.952 14:45:58 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2348436 00:18:38.952 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2348436 ']' 00:18:38.952 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2348436 00:18:38.952 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:38.952 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.952 14:45:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2348436 00:18:38.952 14:45:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:38.952 14:45:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:38.952 14:45:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2348436' 00:18:38.952 killing process with pid 2348436 00:18:38.952 14:45:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2348436 00:18:38.952 14:45:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2348436 00:18:39.212 14:45:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:39.212 14:45:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:39.212 14:45:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:39.212 14:45:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:39.212 14:45:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:39.212 14:45:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.212 14:45:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.212 14:45:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.121 14:46:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:41.121 00:18:41.121 real 0m9.857s 00:18:41.121 user 0m13.720s 00:18:41.121 sys 0m4.582s 00:18:41.121 14:46:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:41.121 14:46:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.121 ************************************ 00:18:41.121 END TEST nvmf_bdevio_no_huge 00:18:41.121 ************************************ 00:18:41.381 14:46:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:41.381 14:46:01 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:41.381 14:46:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:41.381 14:46:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:41.381 14:46:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:41.381 ************************************ 00:18:41.381 START TEST nvmf_tls 00:18:41.381 ************************************ 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:41.381 * Looking for test storage... 
00:18:41.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:41.381 14:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:46.660 
14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.660 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:46.661 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:46.661 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:46.661 Found net devices under 0000:86:00.0: cvl_0_0 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:46.661 Found net devices under 0000:86:00.1: cvl_0_1 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:46.661 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:46.921 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:46.921 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:46.921 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:46.921 14:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:46.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:18:46.921 00:18:46.921 --- 10.0.0.2 ping statistics --- 00:18:46.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.921 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:46.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:46.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.403 ms 00:18:46.921 00:18:46.921 --- 10.0.0.1 ping statistics --- 00:18:46.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.921 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2352232 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2352232 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2352232 ']' 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.921 14:46:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.921 [2024-07-25 14:46:07.188083] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:18:46.921 [2024-07-25 14:46:07.188128] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.182 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.182 [2024-07-25 14:46:07.246201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.182 [2024-07-25 14:46:07.324598] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.182 [2024-07-25 14:46:07.324633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:47.182 [2024-07-25 14:46:07.324640] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.182 [2024-07-25 14:46:07.324646] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.182 [2024-07-25 14:46:07.324651] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.182 [2024-07-25 14:46:07.324684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.751 14:46:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.751 14:46:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:47.751 14:46:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:47.751 14:46:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:47.751 14:46:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.751 14:46:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.751 14:46:08 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:47.751 14:46:08 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:48.137 true 00:18:48.137 14:46:08 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.137 14:46:08 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:48.137 14:46:08 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:48.137 14:46:08 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:48.137 14:46:08 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:48.396 14:46:08 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.396 14:46:08 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:48.656 14:46:08 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:48.656 14:46:08 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:48.656 14:46:08 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:48.656 14:46:08 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.656 14:46:08 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:48.915 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:48.915 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:48.915 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.915 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:49.174 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:49.174 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:49.174 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:49.174 14:46:09 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.174 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:49.433 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:49.433 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:49.433 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:49.693 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:49.953 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.CdM1fzK8Bl 00:18:49.953 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:49.953 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.uOmXyIJmWq 00:18:49.953 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:49.953 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:49.953 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.CdM1fzK8Bl 00:18:49.953 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.uOmXyIJmWq 00:18:49.953 14:46:09 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:18:49.953 14:46:10 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:50.212 14:46:10 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.CdM1fzK8Bl 00:18:50.212 14:46:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.CdM1fzK8Bl 00:18:50.212 14:46:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:50.471 [2024-07-25 14:46:10.571007] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.471 14:46:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:50.471 14:46:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:50.731 [2024-07-25 14:46:10.883796] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:50.731 [2024-07-25 14:46:10.883970] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.731 14:46:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:50.990 malloc0 00:18:50.990 14:46:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:50.990 14:46:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CdM1fzK8Bl 00:18:51.249 [2024-07-25 14:46:11.361031] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:51.249 14:46:11 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.CdM1fzK8Bl 00:18:51.249 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.240 Initializing NVMe Controllers 00:19:01.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:01.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:01.240 Initialization complete. Launching workers. 
00:19:01.240 ======================================================== 00:19:01.240 Latency(us) 00:19:01.240 Device Information : IOPS MiB/s Average min max 00:19:01.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16487.90 64.41 3882.08 822.96 5360.18 00:19:01.240 ======================================================== 00:19:01.240 Total : 16487.90 64.41 3882.08 822.96 5360.18 00:19:01.240 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CdM1fzK8Bl 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.CdM1fzK8Bl' 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2354592 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2354592 /var/tmp/bdevperf.sock 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2354592 ']' 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:01.240 14:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.240 [2024-07-25 14:46:21.523035] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:19:01.240 [2024-07-25 14:46:21.523091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354592 ] 00:19:01.500 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.500 [2024-07-25 14:46:21.572650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.500 [2024-07-25 14:46:21.651170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.069 14:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.069 14:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:02.069 14:46:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CdM1fzK8Bl 00:19:02.328 [2024-07-25 14:46:22.493108] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.328 [2024-07-25 14:46:22.493187] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:02.328 TLSTESTn1 00:19:02.328 14:46:22 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:02.588 Running I/O for 10 seconds... 00:19:14.799 00:19:14.799 Latency(us) 00:19:14.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.799 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:14.799 Verification LBA range: start 0x0 length 0x2000 00:19:14.799 TLSTESTn1 : 10.15 1151.80 4.50 0.00 0.00 110539.66 6468.12 170507.58 00:19:14.799 =================================================================================================================== 00:19:14.799 Total : 1151.80 4.50 0.00 0.00 110539.66 6468.12 170507.58 00:19:14.799 0 00:19:14.799 14:46:32 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:14.799 14:46:32 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2354592 00:19:14.799 14:46:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2354592 ']' 00:19:14.799 14:46:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2354592 00:19:14.799 14:46:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:14.799 14:46:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.799 14:46:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2354592 00:19:14.799 14:46:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:14.799 14:46:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:14.799 14:46:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2354592' 00:19:14.799 killing process with pid 2354592 00:19:14.799 14:46:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2354592 00:19:14.799 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.799 00:19:14.799 Latency(us) 00:19:14.799 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:19:14.799 =================================================================================================================== 00:19:14.799 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.799 [2024-07-25 14:46:32.938299] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:14.799 14:46:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2354592 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uOmXyIJmWq 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uOmXyIJmWq 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uOmXyIJmWq 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uOmXyIJmWq' 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2356630 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2356630 /var/tmp/bdevperf.sock 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2356630 ']' 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.799 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.799 [2024-07-25 14:46:33.166882] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:19:14.800 [2024-07-25 14:46:33.166930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356630 ] 00:19:14.800 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.800 [2024-07-25 14:46:33.216253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.800 [2024-07-25 14:46:33.293747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.800 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.800 14:46:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:14.800 14:46:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uOmXyIJmWq 00:19:14.800 [2024-07-25 14:46:34.135518] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.800 [2024-07-25 14:46:34.135585] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:14.800 [2024-07-25 14:46:34.142644] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:14.800 [2024-07-25 14:46:34.144067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x974570 (107): Transport endpoint is not connected 00:19:14.800 [2024-07-25 14:46:34.145062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x974570 (9): Bad file descriptor 00:19:14.800 [2024-07-25 14:46:34.146064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:14.800 [2024-07-25 14:46:34.146073] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:14.800 [2024-07-25 14:46:34.146085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:14.800 request: 00:19:14.800 { 00:19:14.800 "name": "TLSTEST", 00:19:14.800 "trtype": "tcp", 00:19:14.800 "traddr": "10.0.0.2", 00:19:14.800 "adrfam": "ipv4", 00:19:14.800 "trsvcid": "4420", 00:19:14.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.800 "prchk_reftag": false, 00:19:14.800 "prchk_guard": false, 00:19:14.800 "hdgst": false, 00:19:14.800 "ddgst": false, 00:19:14.800 "psk": "/tmp/tmp.uOmXyIJmWq", 00:19:14.800 "method": "bdev_nvme_attach_controller", 00:19:14.800 "req_id": 1 00:19:14.800 } 00:19:14.800 Got JSON-RPC error response 00:19:14.800 response: 00:19:14.800 { 00:19:14.800 "code": -5, 00:19:14.800 "message": "Input/output error" 00:19:14.800 } 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2356630 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2356630 ']' 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2356630 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2356630 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2356630' 00:19:14.800 killing process with pid 2356630 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2356630 00:19:14.800 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.800 00:19:14.800 Latency(us) 00:19:14.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.800 =================================================================================================================== 00:19:14.800 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.800 [2024-07-25 14:46:34.220441] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2356630 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.CdM1fzK8Bl 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.CdM1fzK8Bl 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.CdM1fzK8Bl 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.CdM1fzK8Bl' 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2356827 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2356827 /var/tmp/bdevperf.sock 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2356827 ']' 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.800 14:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.800 [2024-07-25 14:46:34.443475] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:19:14.800 [2024-07-25 14:46:34.443522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356827 ] 00:19:14.800 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.800 [2024-07-25 14:46:34.493362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.800 [2024-07-25 14:46:34.571806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.059 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.059 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:15.059 14:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.CdM1fzK8Bl 00:19:15.320 [2024-07-25 14:46:35.409605] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.320 [2024-07-25 14:46:35.409674] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:15.320 [2024-07-25 14:46:35.415818] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:15.320 [2024-07-25 14:46:35.415839] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:15.320 [2024-07-25 14:46:35.415866] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:15.320 [2024-07-25 14:46:35.417216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc08570 (107): Transport endpoint is not connected 00:19:15.320 [2024-07-25 14:46:35.418209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc08570 (9): Bad file descriptor 00:19:15.320 [2024-07-25 14:46:35.419210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:15.320 [2024-07-25 14:46:35.419219] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:15.320 [2024-07-25 14:46:35.419228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
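[editor's note] In this mismatched-NQN case (and again at target/tls.sh@152 below), the target drops the connection before the handshake completes because it looks up the pre-shared key by a TLS PSK identity built from the host and subsystem NQNs; the identity it could not find is quoted verbatim in the tcp.c/posix.c errors above. A tiny illustration of that identity string, reconstructed from the error text rather than from SPDK source:

def psk_identity(hostnqn: str, subnqn: str) -> str:
    # Mirrors the identity quoted in the errors above:
    # "NVMe0R01 <hostnqn> <subnqn>". Purely illustrative.
    return f"NVMe0R01 {hostnqn} {subnqn}"

print(psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))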
00:19:15.320 request: 00:19:15.320 { 00:19:15.320 "name": "TLSTEST", 00:19:15.320 "trtype": "tcp", 00:19:15.320 "traddr": "10.0.0.2", 00:19:15.320 "adrfam": "ipv4", 00:19:15.320 "trsvcid": "4420", 00:19:15.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.320 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:15.320 "prchk_reftag": false, 00:19:15.320 "prchk_guard": false, 00:19:15.320 "hdgst": false, 00:19:15.320 "ddgst": false, 00:19:15.320 "psk": "/tmp/tmp.CdM1fzK8Bl", 00:19:15.320 "method": "bdev_nvme_attach_controller", 00:19:15.320 "req_id": 1 00:19:15.320 } 00:19:15.320 Got JSON-RPC error response 00:19:15.320 response: 00:19:15.320 { 00:19:15.320 "code": -5, 00:19:15.320 "message": "Input/output error" 00:19:15.320 } 00:19:15.320 14:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2356827 00:19:15.320 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2356827 ']' 00:19:15.320 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2356827 00:19:15.320 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:15.320 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:15.320 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2356827 00:19:15.320 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:15.320 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:15.320 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2356827' 00:19:15.320 killing process with pid 2356827 00:19:15.320 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2356827 00:19:15.320 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.320 00:19:15.320 Latency(us) 00:19:15.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.320 =================================================================================================================== 00:19:15.320 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:15.320 [2024-07-25 14:46:35.481964] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:15.320 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2356827 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.CdM1fzK8Bl 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.CdM1fzK8Bl 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.CdM1fzK8Bl 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.CdM1fzK8Bl' 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2356982 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2356982 /var/tmp/bdevperf.sock 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2356982 ']' 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:15.580 14:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.580 [2024-07-25 14:46:35.708917] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:19:15.580 [2024-07-25 14:46:35.708969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356982 ] 00:19:15.580 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.580 [2024-07-25 14:46:35.761581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.580 [2024-07-25 14:46:35.833758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.519 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:16.519 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:16.519 14:46:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CdM1fzK8Bl 00:19:16.519 [2024-07-25 14:46:36.656334] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.519 [2024-07-25 14:46:36.656403] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:16.519 [2024-07-25 14:46:36.661295] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:16.519 [2024-07-25 14:46:36.661316] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:16.519 [2024-07-25 14:46:36.661339] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:16.519 [2024-07-25 14:46:36.661980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174e570 (107): Transport endpoint is not connected 00:19:16.519 [2024-07-25 14:46:36.662972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174e570 (9): Bad file descriptor 00:19:16.519 [2024-07-25 14:46:36.663973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:16.519 [2024-07-25 14:46:36.663982] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:16.519 [2024-07-25 14:46:36.663991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:16.519 request: 00:19:16.519 { 00:19:16.519 "name": "TLSTEST", 00:19:16.519 "trtype": "tcp", 00:19:16.519 "traddr": "10.0.0.2", 00:19:16.519 "adrfam": "ipv4", 00:19:16.519 "trsvcid": "4420", 00:19:16.519 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:16.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.519 "prchk_reftag": false, 00:19:16.519 "prchk_guard": false, 00:19:16.519 "hdgst": false, 00:19:16.519 "ddgst": false, 00:19:16.519 "psk": "/tmp/tmp.CdM1fzK8Bl", 00:19:16.519 "method": "bdev_nvme_attach_controller", 00:19:16.519 "req_id": 1 00:19:16.519 } 00:19:16.519 Got JSON-RPC error response 00:19:16.519 response: 00:19:16.519 { 00:19:16.519 "code": -5, 00:19:16.519 "message": "Input/output error" 00:19:16.519 } 00:19:16.519 14:46:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2356982 00:19:16.519 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2356982 ']' 00:19:16.519 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2356982 00:19:16.519 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:16.519 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.519 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2356982 00:19:16.519 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:16.519 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:16.519 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2356982' 00:19:16.519 killing process with pid 2356982 00:19:16.519 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2356982 00:19:16.519 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.519 00:19:16.519 Latency(us) 00:19:16.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.520 =================================================================================================================== 00:19:16.520 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:16.520 [2024-07-25 14:46:36.726786] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:16.520 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2356982 00:19:16.784 14:46:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:16.784 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:16.784 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:16.784 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:16.784 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:16.784 14:46:36 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:16.789 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:16.789 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:16.789 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:16.789 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.789 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:19:16.789 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.789 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:16.789 14:46:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:16.789 14:46:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:16.790 14:46:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:16.790 14:46:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:16.790 14:46:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.790 14:46:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2357140 00:19:16.790 14:46:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:16.790 14:46:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:16.790 14:46:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2357140 /var/tmp/bdevperf.sock 00:19:16.790 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2357140 ']' 00:19:16.790 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.790 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.790 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.790 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.790 14:46:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.790 [2024-07-25 14:46:36.946365] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:19:16.790 [2024-07-25 14:46:36.946416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2357140 ] 00:19:16.790 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.790 [2024-07-25 14:46:36.997699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.055 [2024-07-25 14:46:37.076976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.624 14:46:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.624 14:46:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:17.624 14:46:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:17.624 [2024-07-25 14:46:37.902753] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:17.624 [2024-07-25 14:46:37.904641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22af0 (9): Bad file descriptor 00:19:17.624 [2024-07-25 14:46:37.905639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:17.624 [2024-07-25 14:46:37.905653] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:17.624 [2024-07-25 14:46:37.905661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:17.624 request: 00:19:17.624 { 00:19:17.624 "name": "TLSTEST", 00:19:17.624 "trtype": "tcp", 00:19:17.624 "traddr": "10.0.0.2", 00:19:17.624 "adrfam": "ipv4", 00:19:17.624 "trsvcid": "4420", 00:19:17.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.625 "prchk_reftag": false, 00:19:17.625 "prchk_guard": false, 00:19:17.625 "hdgst": false, 00:19:17.625 "ddgst": false, 00:19:17.625 "method": "bdev_nvme_attach_controller", 00:19:17.625 "req_id": 1 00:19:17.625 } 00:19:17.625 Got JSON-RPC error response 00:19:17.625 response: 00:19:17.625 { 00:19:17.625 "code": -5, 00:19:17.625 "message": "Input/output error" 00:19:17.625 } 00:19:17.884 14:46:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2357140 00:19:17.884 14:46:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2357140 ']' 00:19:17.884 14:46:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2357140 00:19:17.884 14:46:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:17.884 14:46:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.884 14:46:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2357140 00:19:17.884 14:46:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:17.884 14:46:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:17.884 14:46:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2357140' 00:19:17.884 killing process with pid 2357140 00:19:17.884 14:46:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2357140 00:19:17.884 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.884 00:19:17.884 Latency(us) 00:19:17.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.884 =================================================================================================================== 00:19:17.884 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.884 14:46:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2357140 00:19:17.884 14:46:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:17.884 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:17.884 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:17.884 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:17.884 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:17.884 14:46:38 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2352232 00:19:17.884 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2352232 ']' 00:19:17.884 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2352232 00:19:17.884 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:17.884 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.884 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2352232 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2352232' 00:19:18.144 
killing process with pid 2352232 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2352232 00:19:18.144 [2024-07-25 14:46:38.185760] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2352232 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.XJLU5LmErb 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.XJLU5LmErb 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:18.144 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.403 14:46:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2357470 00:19:18.403 14:46:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2357470 00:19:18.403 14:46:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:18.404 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2357470 ']' 00:19:18.404 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.404 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.404 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.404 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.404 14:46:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.404 [2024-07-25 14:46:38.486396] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
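[editor's note] target/tls.sh@159-@162 above turns the 48-character configured key into the NVMe TLS PSK interchange form and stores it in a file with mode 0600. The sketch below approximates what the inline "python -" heredoc in nvmf/common.sh computes, under the assumption (consistent with the key_long value printed above) that the interchange string is NVMeTLSkey-1:<hh>:base64(key bytes + CRC32 of the key bytes, little-endian):, with 01/02 selecting SHA-256/SHA-384; the helper name here is ours, not SPDK's.

import base64
import zlib

def to_interchange_psk(configured_key: str, hash_id: int = 2) -> str:
    # Append the CRC32 of the key bytes (little-endian), base64-encode, and
    # wrap in the NVMeTLSkey-1 prefix and hash indicator (assumed layout).
    raw = configured_key.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, "little")
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02x}:{b64}:"

key = to_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2)
print(key)  # if the assumptions hold, this matches the key_long value above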
00:19:18.404 [2024-07-25 14:46:38.486448] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.404 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.404 [2024-07-25 14:46:38.546104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.404 [2024-07-25 14:46:38.619441] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.404 [2024-07-25 14:46:38.619481] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.404 [2024-07-25 14:46:38.619488] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.404 [2024-07-25 14:46:38.619493] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.404 [2024-07-25 14:46:38.619498] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.404 [2024-07-25 14:46:38.619534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.343 14:46:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.343 14:46:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:19.343 14:46:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:19.343 14:46:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:19.343 14:46:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.343 14:46:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.343 14:46:39 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.XJLU5LmErb 00:19:19.343 14:46:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XJLU5LmErb 00:19:19.343 14:46:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:19.343 [2024-07-25 14:46:39.457821] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.343 14:46:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:19.603 14:46:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:19.603 [2024-07-25 14:46:39.802716] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:19.603 [2024-07-25 14:46:39.802903] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.603 14:46:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:19.863 malloc0 00:19:19.863 14:46:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.XJLU5LmErb 00:19:20.123 [2024-07-25 14:46:40.320267] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XJLU5LmErb 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XJLU5LmErb' 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2357858 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2357858 /var/tmp/bdevperf.sock 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2357858 ']' 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.123 14:46:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.123 [2024-07-25 14:46:40.369630] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:19:20.123 [2024-07-25 14:46:40.369678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2357858 ] 00:19:20.123 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.382 [2024-07-25 14:46:40.420793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.382 [2024-07-25 14:46:40.494060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.382 14:46:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:20.382 14:46:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:20.382 14:46:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XJLU5LmErb 00:19:20.642 [2024-07-25 14:46:40.734399] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.642 [2024-07-25 14:46:40.734492] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:20.642 TLSTESTn1 00:19:20.642 14:46:40 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:20.642 Running I/O for 10 seconds... 00:19:32.864 00:19:32.864 Latency(us) 00:19:32.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.864 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:32.864 Verification LBA range: start 0x0 length 0x2000 00:19:32.864 TLSTESTn1 : 10.09 1200.34 4.69 0.00 0.00 106276.50 6468.12 157742.30 00:19:32.864 =================================================================================================================== 00:19:32.864 Total : 1200.34 4.69 0.00 0.00 106276.50 6468.12 157742.30 00:19:32.864 0 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2357858 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2357858 ']' 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2357858 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2357858 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2357858' 00:19:32.864 killing process with pid 2357858 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2357858 00:19:32.864 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.864 00:19:32.864 Latency(us) 00:19:32.864 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:19:32.864 =================================================================================================================== 00:19:32.864 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.864 [2024-07-25 14:46:51.103203] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2357858 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.XJLU5LmErb 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XJLU5LmErb 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XJLU5LmErb 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XJLU5LmErb 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XJLU5LmErb' 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2359686 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2359686 /var/tmp/bdevperf.sock 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2359686 ']' 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.864 14:46:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.864 [2024-07-25 14:46:51.335287] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:19:32.864 [2024-07-25 14:46:51.335340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359686 ] 00:19:32.864 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.864 [2024-07-25 14:46:51.386235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.864 [2024-07-25 14:46:51.453870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XJLU5LmErb 00:19:32.864 [2024-07-25 14:46:52.291549] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.864 [2024-07-25 14:46:52.291600] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:32.864 [2024-07-25 14:46:52.291607] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.XJLU5LmErb 00:19:32.864 request: 00:19:32.864 { 00:19:32.864 "name": "TLSTEST", 00:19:32.864 "trtype": "tcp", 00:19:32.864 "traddr": "10.0.0.2", 00:19:32.864 "adrfam": "ipv4", 00:19:32.864 "trsvcid": "4420", 00:19:32.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:32.864 "prchk_reftag": false, 00:19:32.864 "prchk_guard": false, 00:19:32.864 "hdgst": false, 00:19:32.864 "ddgst": false, 00:19:32.864 "psk": "/tmp/tmp.XJLU5LmErb", 00:19:32.864 "method": "bdev_nvme_attach_controller", 00:19:32.864 "req_id": 1 00:19:32.864 } 00:19:32.864 Got JSON-RPC error response 00:19:32.864 response: 00:19:32.864 { 00:19:32.864 "code": -1, 00:19:32.864 "message": "Operation not permitted" 00:19:32.864 } 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2359686 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2359686 ']' 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2359686 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2359686 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2359686' 00:19:32.864 killing process with pid 2359686 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2359686 00:19:32.864 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.864 00:19:32.864 Latency(us) 00:19:32.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.864 
=================================================================================================================== 00:19:32.864 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2359686 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2357470 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2357470 ']' 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2357470 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2357470 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2357470' 00:19:32.864 killing process with pid 2357470 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2357470 00:19:32.864 [2024-07-25 14:46:52.579267] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2357470 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:32.864 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.865 14:46:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2359929 00:19:32.865 14:46:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2359929 00:19:32.865 14:46:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:32.865 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2359929 ']' 00:19:32.865 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.865 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.865 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:32.865 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.865 14:46:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.865 [2024-07-25 14:46:52.830667] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:19:32.865 [2024-07-25 14:46:52.830715] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.865 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.865 [2024-07-25 14:46:52.888738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.865 [2024-07-25 14:46:52.964965] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.865 [2024-07-25 14:46:52.965005] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.865 [2024-07-25 14:46:52.965012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.865 [2024-07-25 14:46:52.965018] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.865 [2024-07-25 14:46:52.965023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.865 [2024-07-25 14:46:52.965061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.XJLU5LmErb 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.XJLU5LmErb 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.XJLU5LmErb 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XJLU5LmErb 00:19:33.434 14:46:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:33.693 [2024-07-25 14:46:53.821184] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.693 14:46:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:33.953 
14:46:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:33.953 [2024-07-25 14:46:54.170081] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:33.953 [2024-07-25 14:46:54.170280] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.953 14:46:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:34.212 malloc0 00:19:34.212 14:46:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XJLU5LmErb 00:19:34.472 [2024-07-25 14:46:54.679466] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:34.472 [2024-07-25 14:46:54.679490] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:34.472 [2024-07-25 14:46:54.679511] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:34.472 request: 00:19:34.472 { 00:19:34.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.472 "host": "nqn.2016-06.io.spdk:host1", 00:19:34.472 "psk": "/tmp/tmp.XJLU5LmErb", 00:19:34.472 "method": "nvmf_subsystem_add_host", 00:19:34.472 "req_id": 1 00:19:34.472 } 00:19:34.472 Got JSON-RPC error response 00:19:34.472 response: 00:19:34.472 { 00:19:34.472 "code": -32603, 00:19:34.472 "message": "Internal error" 00:19:34.472 } 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2359929 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2359929 ']' 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2359929 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2359929 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2359929' 00:19:34.472 killing process with pid 2359929 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2359929 00:19:34.472 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2359929 00:19:34.732 14:46:54 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.XJLU5LmErb 00:19:34.732 14:46:54 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:34.732 
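[editor's note] The two failures above show the same key file being rejected once it is world-readable: bdev_nvme_attach_controller at target/tls.sh@171 returns -1 ("Incorrect permissions for PSK file"), and nvmf_subsystem_add_host at @177 returns -32603 after "Could not retrieve PSK from file"; once @181 restores mode 0600, the following run succeeds again. A minimal sketch of the kind of mode check involved, written as an assumption about the policy rather than a copy of the SPDK check:

import os
import stat

def check_psk_file_mode(path: str) -> None:
    # Illustrative check only: reject any group/other access bits, mirroring
    # why the 0666 key file above is refused on both the initiator and the
    # target sides until it is chmod'ed back to 0600.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(f"{path}: mode {mode:o} is too permissive; use 0600")

check_psk_file_mode("/tmp/tmp.XJLU5LmErb")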
14:46:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:34.732 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:34.732 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.732 14:46:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2360204 00:19:34.732 14:46:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:34.732 14:46:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2360204 00:19:34.732 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2360204 ']' 00:19:34.732 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.732 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.732 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.732 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.732 14:46:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.732 [2024-07-25 14:46:54.995910] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:19:34.732 [2024-07-25 14:46:54.995956] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.732 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.993 [2024-07-25 14:46:55.052823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.993 [2024-07-25 14:46:55.128287] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.993 [2024-07-25 14:46:55.128324] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.993 [2024-07-25 14:46:55.128331] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.993 [2024-07-25 14:46:55.128337] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.993 [2024-07-25 14:46:55.128341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:34.993 [2024-07-25 14:46:55.128377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.563 14:46:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.563 14:46:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:35.563 14:46:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:35.563 14:46:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:35.563 14:46:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.563 14:46:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.563 14:46:55 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.XJLU5LmErb 00:19:35.563 14:46:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XJLU5LmErb 00:19:35.563 14:46:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:35.823 [2024-07-25 14:46:55.976343] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.823 14:46:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:36.084 14:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:36.084 [2024-07-25 14:46:56.317192] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.084 [2024-07-25 14:46:56.317379] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.084 14:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:36.344 malloc0 00:19:36.344 14:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:36.605 14:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XJLU5LmErb 00:19:36.605 [2024-07-25 14:46:56.842803] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:36.605 14:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2360678 00:19:36.605 14:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.605 14:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.605 14:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2360678 /var/tmp/bdevperf.sock 00:19:36.605 14:46:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2360678 ']' 00:19:36.605 14:46:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.605 14:46:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.605 14:46:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.605 14:46:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.605 14:46:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.865 [2024-07-25 14:46:56.905195] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:19:36.865 [2024-07-25 14:46:56.905244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360678 ] 00:19:36.865 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.865 [2024-07-25 14:46:56.955685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.865 [2024-07-25 14:46:57.032555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.437 14:46:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.437 14:46:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:37.437 14:46:57 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XJLU5LmErb 00:19:37.697 [2024-07-25 14:46:57.859648] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.697 [2024-07-25 14:46:57.859736] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:37.697 TLSTESTn1 00:19:37.697 14:46:57 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:37.958 14:46:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:37.958 "subsystems": [ 00:19:37.958 { 00:19:37.958 "subsystem": "keyring", 00:19:37.958 "config": [] 00:19:37.958 }, 00:19:37.958 { 00:19:37.958 "subsystem": "iobuf", 00:19:37.958 "config": [ 00:19:37.958 { 00:19:37.958 "method": "iobuf_set_options", 00:19:37.958 "params": { 00:19:37.958 "small_pool_count": 8192, 00:19:37.958 "large_pool_count": 1024, 00:19:37.958 "small_bufsize": 8192, 00:19:37.958 "large_bufsize": 135168 00:19:37.958 } 00:19:37.958 } 00:19:37.958 ] 00:19:37.958 }, 00:19:37.958 { 00:19:37.958 "subsystem": "sock", 00:19:37.958 "config": [ 00:19:37.958 { 00:19:37.958 "method": "sock_set_default_impl", 00:19:37.958 "params": { 00:19:37.958 "impl_name": "posix" 00:19:37.958 } 00:19:37.958 }, 00:19:37.958 { 00:19:37.958 "method": "sock_impl_set_options", 00:19:37.958 "params": { 00:19:37.958 "impl_name": "ssl", 00:19:37.958 "recv_buf_size": 4096, 00:19:37.958 "send_buf_size": 4096, 00:19:37.958 "enable_recv_pipe": true, 00:19:37.958 "enable_quickack": false, 00:19:37.958 "enable_placement_id": 0, 00:19:37.958 "enable_zerocopy_send_server": true, 00:19:37.958 "enable_zerocopy_send_client": false, 00:19:37.958 "zerocopy_threshold": 0, 00:19:37.958 "tls_version": 0, 00:19:37.958 "enable_ktls": false 00:19:37.958 } 00:19:37.958 }, 00:19:37.958 { 00:19:37.958 "method": "sock_impl_set_options", 00:19:37.958 "params": { 00:19:37.958 "impl_name": "posix", 00:19:37.958 "recv_buf_size": 2097152, 00:19:37.958 
"send_buf_size": 2097152, 00:19:37.958 "enable_recv_pipe": true, 00:19:37.958 "enable_quickack": false, 00:19:37.958 "enable_placement_id": 0, 00:19:37.958 "enable_zerocopy_send_server": true, 00:19:37.958 "enable_zerocopy_send_client": false, 00:19:37.958 "zerocopy_threshold": 0, 00:19:37.958 "tls_version": 0, 00:19:37.958 "enable_ktls": false 00:19:37.958 } 00:19:37.958 } 00:19:37.958 ] 00:19:37.958 }, 00:19:37.958 { 00:19:37.958 "subsystem": "vmd", 00:19:37.958 "config": [] 00:19:37.958 }, 00:19:37.958 { 00:19:37.958 "subsystem": "accel", 00:19:37.958 "config": [ 00:19:37.958 { 00:19:37.958 "method": "accel_set_options", 00:19:37.958 "params": { 00:19:37.958 "small_cache_size": 128, 00:19:37.958 "large_cache_size": 16, 00:19:37.958 "task_count": 2048, 00:19:37.958 "sequence_count": 2048, 00:19:37.958 "buf_count": 2048 00:19:37.958 } 00:19:37.958 } 00:19:37.958 ] 00:19:37.958 }, 00:19:37.958 { 00:19:37.958 "subsystem": "bdev", 00:19:37.958 "config": [ 00:19:37.958 { 00:19:37.958 "method": "bdev_set_options", 00:19:37.958 "params": { 00:19:37.958 "bdev_io_pool_size": 65535, 00:19:37.958 "bdev_io_cache_size": 256, 00:19:37.958 "bdev_auto_examine": true, 00:19:37.958 "iobuf_small_cache_size": 128, 00:19:37.958 "iobuf_large_cache_size": 16 00:19:37.958 } 00:19:37.958 }, 00:19:37.958 { 00:19:37.958 "method": "bdev_raid_set_options", 00:19:37.958 "params": { 00:19:37.958 "process_window_size_kb": 1024 00:19:37.958 } 00:19:37.958 }, 00:19:37.958 { 00:19:37.958 "method": "bdev_iscsi_set_options", 00:19:37.958 "params": { 00:19:37.958 "timeout_sec": 30 00:19:37.958 } 00:19:37.958 }, 00:19:37.958 { 00:19:37.958 "method": "bdev_nvme_set_options", 00:19:37.958 "params": { 00:19:37.958 "action_on_timeout": "none", 00:19:37.958 "timeout_us": 0, 00:19:37.958 "timeout_admin_us": 0, 00:19:37.958 "keep_alive_timeout_ms": 10000, 00:19:37.958 "arbitration_burst": 0, 00:19:37.958 "low_priority_weight": 0, 00:19:37.958 "medium_priority_weight": 0, 00:19:37.958 "high_priority_weight": 0, 00:19:37.958 "nvme_adminq_poll_period_us": 10000, 00:19:37.958 "nvme_ioq_poll_period_us": 0, 00:19:37.958 "io_queue_requests": 0, 00:19:37.958 "delay_cmd_submit": true, 00:19:37.958 "transport_retry_count": 4, 00:19:37.958 "bdev_retry_count": 3, 00:19:37.958 "transport_ack_timeout": 0, 00:19:37.958 "ctrlr_loss_timeout_sec": 0, 00:19:37.958 "reconnect_delay_sec": 0, 00:19:37.958 "fast_io_fail_timeout_sec": 0, 00:19:37.958 "disable_auto_failback": false, 00:19:37.958 "generate_uuids": false, 00:19:37.958 "transport_tos": 0, 00:19:37.958 "nvme_error_stat": false, 00:19:37.958 "rdma_srq_size": 0, 00:19:37.958 "io_path_stat": false, 00:19:37.958 "allow_accel_sequence": false, 00:19:37.958 "rdma_max_cq_size": 0, 00:19:37.958 "rdma_cm_event_timeout_ms": 0, 00:19:37.958 "dhchap_digests": [ 00:19:37.958 "sha256", 00:19:37.958 "sha384", 00:19:37.958 "sha512" 00:19:37.958 ], 00:19:37.958 "dhchap_dhgroups": [ 00:19:37.958 "null", 00:19:37.958 "ffdhe2048", 00:19:37.958 "ffdhe3072", 00:19:37.958 "ffdhe4096", 00:19:37.958 "ffdhe6144", 00:19:37.958 "ffdhe8192" 00:19:37.958 ] 00:19:37.958 } 00:19:37.958 }, 00:19:37.958 { 00:19:37.958 "method": "bdev_nvme_set_hotplug", 00:19:37.958 "params": { 00:19:37.958 "period_us": 100000, 00:19:37.958 "enable": false 00:19:37.958 } 00:19:37.958 }, 00:19:37.958 { 00:19:37.958 "method": "bdev_malloc_create", 00:19:37.958 "params": { 00:19:37.958 "name": "malloc0", 00:19:37.958 "num_blocks": 8192, 00:19:37.959 "block_size": 4096, 00:19:37.959 "physical_block_size": 4096, 00:19:37.959 "uuid": 
"cad95d2d-1677-47e5-a4b2-88ecdb5f0078", 00:19:37.959 "optimal_io_boundary": 0 00:19:37.959 } 00:19:37.959 }, 00:19:37.959 { 00:19:37.959 "method": "bdev_wait_for_examine" 00:19:37.959 } 00:19:37.959 ] 00:19:37.959 }, 00:19:37.959 { 00:19:37.959 "subsystem": "nbd", 00:19:37.959 "config": [] 00:19:37.959 }, 00:19:37.959 { 00:19:37.959 "subsystem": "scheduler", 00:19:37.959 "config": [ 00:19:37.959 { 00:19:37.959 "method": "framework_set_scheduler", 00:19:37.959 "params": { 00:19:37.959 "name": "static" 00:19:37.959 } 00:19:37.959 } 00:19:37.959 ] 00:19:37.959 }, 00:19:37.959 { 00:19:37.959 "subsystem": "nvmf", 00:19:37.959 "config": [ 00:19:37.959 { 00:19:37.959 "method": "nvmf_set_config", 00:19:37.959 "params": { 00:19:37.959 "discovery_filter": "match_any", 00:19:37.959 "admin_cmd_passthru": { 00:19:37.959 "identify_ctrlr": false 00:19:37.959 } 00:19:37.959 } 00:19:37.959 }, 00:19:37.959 { 00:19:37.959 "method": "nvmf_set_max_subsystems", 00:19:37.959 "params": { 00:19:37.959 "max_subsystems": 1024 00:19:37.959 } 00:19:37.959 }, 00:19:37.959 { 00:19:37.959 "method": "nvmf_set_crdt", 00:19:37.959 "params": { 00:19:37.959 "crdt1": 0, 00:19:37.959 "crdt2": 0, 00:19:37.959 "crdt3": 0 00:19:37.959 } 00:19:37.959 }, 00:19:37.959 { 00:19:37.959 "method": "nvmf_create_transport", 00:19:37.959 "params": { 00:19:37.959 "trtype": "TCP", 00:19:37.959 "max_queue_depth": 128, 00:19:37.959 "max_io_qpairs_per_ctrlr": 127, 00:19:37.959 "in_capsule_data_size": 4096, 00:19:37.959 "max_io_size": 131072, 00:19:37.959 "io_unit_size": 131072, 00:19:37.959 "max_aq_depth": 128, 00:19:37.959 "num_shared_buffers": 511, 00:19:37.959 "buf_cache_size": 4294967295, 00:19:37.959 "dif_insert_or_strip": false, 00:19:37.959 "zcopy": false, 00:19:37.959 "c2h_success": false, 00:19:37.959 "sock_priority": 0, 00:19:37.959 "abort_timeout_sec": 1, 00:19:37.959 "ack_timeout": 0, 00:19:37.959 "data_wr_pool_size": 0 00:19:37.959 } 00:19:37.959 }, 00:19:37.959 { 00:19:37.959 "method": "nvmf_create_subsystem", 00:19:37.959 "params": { 00:19:37.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.959 "allow_any_host": false, 00:19:37.959 "serial_number": "SPDK00000000000001", 00:19:37.959 "model_number": "SPDK bdev Controller", 00:19:37.959 "max_namespaces": 10, 00:19:37.959 "min_cntlid": 1, 00:19:37.959 "max_cntlid": 65519, 00:19:37.959 "ana_reporting": false 00:19:37.959 } 00:19:37.959 }, 00:19:37.959 { 00:19:37.959 "method": "nvmf_subsystem_add_host", 00:19:37.959 "params": { 00:19:37.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.959 "host": "nqn.2016-06.io.spdk:host1", 00:19:37.959 "psk": "/tmp/tmp.XJLU5LmErb" 00:19:37.959 } 00:19:37.959 }, 00:19:37.959 { 00:19:37.959 "method": "nvmf_subsystem_add_ns", 00:19:37.959 "params": { 00:19:37.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.959 "namespace": { 00:19:37.959 "nsid": 1, 00:19:37.959 "bdev_name": "malloc0", 00:19:37.959 "nguid": "CAD95D2D167747E5A4B288ECDB5F0078", 00:19:37.959 "uuid": "cad95d2d-1677-47e5-a4b2-88ecdb5f0078", 00:19:37.959 "no_auto_visible": false 00:19:37.959 } 00:19:37.959 } 00:19:37.959 }, 00:19:37.959 { 00:19:37.959 "method": "nvmf_subsystem_add_listener", 00:19:37.959 "params": { 00:19:37.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.959 "listen_address": { 00:19:37.959 "trtype": "TCP", 00:19:37.959 "adrfam": "IPv4", 00:19:37.959 "traddr": "10.0.0.2", 00:19:37.959 "trsvcid": "4420" 00:19:37.959 }, 00:19:37.959 "secure_channel": true 00:19:37.959 } 00:19:37.959 } 00:19:37.959 ] 00:19:37.959 } 00:19:37.959 ] 00:19:37.959 }' 00:19:37.959 14:46:58 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:38.272 14:46:58 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:38.272 "subsystems": [ 00:19:38.272 { 00:19:38.272 "subsystem": "keyring", 00:19:38.272 "config": [] 00:19:38.272 }, 00:19:38.272 { 00:19:38.272 "subsystem": "iobuf", 00:19:38.272 "config": [ 00:19:38.272 { 00:19:38.272 "method": "iobuf_set_options", 00:19:38.272 "params": { 00:19:38.272 "small_pool_count": 8192, 00:19:38.272 "large_pool_count": 1024, 00:19:38.272 "small_bufsize": 8192, 00:19:38.272 "large_bufsize": 135168 00:19:38.272 } 00:19:38.272 } 00:19:38.272 ] 00:19:38.272 }, 00:19:38.272 { 00:19:38.272 "subsystem": "sock", 00:19:38.272 "config": [ 00:19:38.272 { 00:19:38.272 "method": "sock_set_default_impl", 00:19:38.272 "params": { 00:19:38.272 "impl_name": "posix" 00:19:38.272 } 00:19:38.272 }, 00:19:38.272 { 00:19:38.272 "method": "sock_impl_set_options", 00:19:38.272 "params": { 00:19:38.272 "impl_name": "ssl", 00:19:38.272 "recv_buf_size": 4096, 00:19:38.272 "send_buf_size": 4096, 00:19:38.272 "enable_recv_pipe": true, 00:19:38.272 "enable_quickack": false, 00:19:38.272 "enable_placement_id": 0, 00:19:38.272 "enable_zerocopy_send_server": true, 00:19:38.272 "enable_zerocopy_send_client": false, 00:19:38.272 "zerocopy_threshold": 0, 00:19:38.272 "tls_version": 0, 00:19:38.272 "enable_ktls": false 00:19:38.272 } 00:19:38.272 }, 00:19:38.272 { 00:19:38.272 "method": "sock_impl_set_options", 00:19:38.272 "params": { 00:19:38.272 "impl_name": "posix", 00:19:38.272 "recv_buf_size": 2097152, 00:19:38.272 "send_buf_size": 2097152, 00:19:38.272 "enable_recv_pipe": true, 00:19:38.272 "enable_quickack": false, 00:19:38.272 "enable_placement_id": 0, 00:19:38.272 "enable_zerocopy_send_server": true, 00:19:38.272 "enable_zerocopy_send_client": false, 00:19:38.272 "zerocopy_threshold": 0, 00:19:38.272 "tls_version": 0, 00:19:38.272 "enable_ktls": false 00:19:38.272 } 00:19:38.272 } 00:19:38.272 ] 00:19:38.272 }, 00:19:38.272 { 00:19:38.272 "subsystem": "vmd", 00:19:38.272 "config": [] 00:19:38.272 }, 00:19:38.272 { 00:19:38.272 "subsystem": "accel", 00:19:38.272 "config": [ 00:19:38.272 { 00:19:38.272 "method": "accel_set_options", 00:19:38.272 "params": { 00:19:38.272 "small_cache_size": 128, 00:19:38.272 "large_cache_size": 16, 00:19:38.272 "task_count": 2048, 00:19:38.272 "sequence_count": 2048, 00:19:38.272 "buf_count": 2048 00:19:38.272 } 00:19:38.272 } 00:19:38.272 ] 00:19:38.272 }, 00:19:38.272 { 00:19:38.272 "subsystem": "bdev", 00:19:38.272 "config": [ 00:19:38.272 { 00:19:38.272 "method": "bdev_set_options", 00:19:38.272 "params": { 00:19:38.272 "bdev_io_pool_size": 65535, 00:19:38.272 "bdev_io_cache_size": 256, 00:19:38.272 "bdev_auto_examine": true, 00:19:38.272 "iobuf_small_cache_size": 128, 00:19:38.272 "iobuf_large_cache_size": 16 00:19:38.272 } 00:19:38.272 }, 00:19:38.272 { 00:19:38.272 "method": "bdev_raid_set_options", 00:19:38.272 "params": { 00:19:38.272 "process_window_size_kb": 1024 00:19:38.272 } 00:19:38.272 }, 00:19:38.272 { 00:19:38.272 "method": "bdev_iscsi_set_options", 00:19:38.272 "params": { 00:19:38.272 "timeout_sec": 30 00:19:38.273 } 00:19:38.273 }, 00:19:38.273 { 00:19:38.273 "method": "bdev_nvme_set_options", 00:19:38.273 "params": { 00:19:38.273 "action_on_timeout": "none", 00:19:38.273 "timeout_us": 0, 00:19:38.273 "timeout_admin_us": 0, 00:19:38.273 "keep_alive_timeout_ms": 10000, 00:19:38.273 "arbitration_burst": 0, 
00:19:38.273 "low_priority_weight": 0, 00:19:38.273 "medium_priority_weight": 0, 00:19:38.273 "high_priority_weight": 0, 00:19:38.273 "nvme_adminq_poll_period_us": 10000, 00:19:38.273 "nvme_ioq_poll_period_us": 0, 00:19:38.273 "io_queue_requests": 512, 00:19:38.273 "delay_cmd_submit": true, 00:19:38.273 "transport_retry_count": 4, 00:19:38.273 "bdev_retry_count": 3, 00:19:38.273 "transport_ack_timeout": 0, 00:19:38.273 "ctrlr_loss_timeout_sec": 0, 00:19:38.273 "reconnect_delay_sec": 0, 00:19:38.273 "fast_io_fail_timeout_sec": 0, 00:19:38.273 "disable_auto_failback": false, 00:19:38.273 "generate_uuids": false, 00:19:38.273 "transport_tos": 0, 00:19:38.273 "nvme_error_stat": false, 00:19:38.273 "rdma_srq_size": 0, 00:19:38.273 "io_path_stat": false, 00:19:38.273 "allow_accel_sequence": false, 00:19:38.273 "rdma_max_cq_size": 0, 00:19:38.273 "rdma_cm_event_timeout_ms": 0, 00:19:38.273 "dhchap_digests": [ 00:19:38.273 "sha256", 00:19:38.273 "sha384", 00:19:38.273 "sha512" 00:19:38.273 ], 00:19:38.273 "dhchap_dhgroups": [ 00:19:38.273 "null", 00:19:38.273 "ffdhe2048", 00:19:38.273 "ffdhe3072", 00:19:38.273 "ffdhe4096", 00:19:38.273 "ffdhe6144", 00:19:38.273 "ffdhe8192" 00:19:38.273 ] 00:19:38.273 } 00:19:38.273 }, 00:19:38.273 { 00:19:38.273 "method": "bdev_nvme_attach_controller", 00:19:38.273 "params": { 00:19:38.273 "name": "TLSTEST", 00:19:38.273 "trtype": "TCP", 00:19:38.273 "adrfam": "IPv4", 00:19:38.273 "traddr": "10.0.0.2", 00:19:38.273 "trsvcid": "4420", 00:19:38.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.273 "prchk_reftag": false, 00:19:38.273 "prchk_guard": false, 00:19:38.273 "ctrlr_loss_timeout_sec": 0, 00:19:38.273 "reconnect_delay_sec": 0, 00:19:38.273 "fast_io_fail_timeout_sec": 0, 00:19:38.273 "psk": "/tmp/tmp.XJLU5LmErb", 00:19:38.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.273 "hdgst": false, 00:19:38.273 "ddgst": false 00:19:38.273 } 00:19:38.273 }, 00:19:38.273 { 00:19:38.273 "method": "bdev_nvme_set_hotplug", 00:19:38.273 "params": { 00:19:38.273 "period_us": 100000, 00:19:38.273 "enable": false 00:19:38.273 } 00:19:38.273 }, 00:19:38.273 { 00:19:38.273 "method": "bdev_wait_for_examine" 00:19:38.273 } 00:19:38.273 ] 00:19:38.273 }, 00:19:38.273 { 00:19:38.273 "subsystem": "nbd", 00:19:38.273 "config": [] 00:19:38.273 } 00:19:38.273 ] 00:19:38.273 }' 00:19:38.273 14:46:58 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2360678 00:19:38.273 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2360678 ']' 00:19:38.273 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2360678 00:19:38.273 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:38.273 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:38.273 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360678 00:19:38.273 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:38.273 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:38.273 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360678' 00:19:38.273 killing process with pid 2360678 00:19:38.273 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2360678 00:19:38.273 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.273 00:19:38.273 Latency(us) 00:19:38.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:19:38.273 =================================================================================================================== 00:19:38.273 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:38.273 [2024-07-25 14:46:58.522668] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:38.273 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2360678 00:19:38.559 14:46:58 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2360204 00:19:38.559 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2360204 ']' 00:19:38.559 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2360204 00:19:38.559 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:38.559 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:38.559 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360204 00:19:38.559 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:38.559 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:38.559 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360204' 00:19:38.559 killing process with pid 2360204 00:19:38.559 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2360204 00:19:38.559 [2024-07-25 14:46:58.752604] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:38.559 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2360204 00:19:38.819 14:46:58 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:38.819 14:46:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:38.819 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:38.819 14:46:58 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:38.819 "subsystems": [ 00:19:38.819 { 00:19:38.819 "subsystem": "keyring", 00:19:38.819 "config": [] 00:19:38.819 }, 00:19:38.819 { 00:19:38.819 "subsystem": "iobuf", 00:19:38.819 "config": [ 00:19:38.819 { 00:19:38.819 "method": "iobuf_set_options", 00:19:38.819 "params": { 00:19:38.819 "small_pool_count": 8192, 00:19:38.819 "large_pool_count": 1024, 00:19:38.819 "small_bufsize": 8192, 00:19:38.819 "large_bufsize": 135168 00:19:38.819 } 00:19:38.819 } 00:19:38.819 ] 00:19:38.819 }, 00:19:38.819 { 00:19:38.819 "subsystem": "sock", 00:19:38.819 "config": [ 00:19:38.819 { 00:19:38.819 "method": "sock_set_default_impl", 00:19:38.819 "params": { 00:19:38.819 "impl_name": "posix" 00:19:38.819 } 00:19:38.819 }, 00:19:38.819 { 00:19:38.819 "method": "sock_impl_set_options", 00:19:38.819 "params": { 00:19:38.819 "impl_name": "ssl", 00:19:38.819 "recv_buf_size": 4096, 00:19:38.819 "send_buf_size": 4096, 00:19:38.819 "enable_recv_pipe": true, 00:19:38.819 "enable_quickack": false, 00:19:38.819 "enable_placement_id": 0, 00:19:38.819 "enable_zerocopy_send_server": true, 00:19:38.819 "enable_zerocopy_send_client": false, 00:19:38.819 "zerocopy_threshold": 0, 00:19:38.819 "tls_version": 0, 00:19:38.819 "enable_ktls": false 00:19:38.819 } 00:19:38.819 }, 00:19:38.819 { 00:19:38.819 "method": "sock_impl_set_options", 00:19:38.819 "params": { 00:19:38.819 "impl_name": "posix", 00:19:38.819 
"recv_buf_size": 2097152, 00:19:38.819 "send_buf_size": 2097152, 00:19:38.819 "enable_recv_pipe": true, 00:19:38.819 "enable_quickack": false, 00:19:38.819 "enable_placement_id": 0, 00:19:38.819 "enable_zerocopy_send_server": true, 00:19:38.819 "enable_zerocopy_send_client": false, 00:19:38.819 "zerocopy_threshold": 0, 00:19:38.819 "tls_version": 0, 00:19:38.819 "enable_ktls": false 00:19:38.819 } 00:19:38.819 } 00:19:38.819 ] 00:19:38.819 }, 00:19:38.819 { 00:19:38.819 "subsystem": "vmd", 00:19:38.819 "config": [] 00:19:38.819 }, 00:19:38.819 { 00:19:38.819 "subsystem": "accel", 00:19:38.819 "config": [ 00:19:38.819 { 00:19:38.819 "method": "accel_set_options", 00:19:38.819 "params": { 00:19:38.819 "small_cache_size": 128, 00:19:38.819 "large_cache_size": 16, 00:19:38.819 "task_count": 2048, 00:19:38.819 "sequence_count": 2048, 00:19:38.819 "buf_count": 2048 00:19:38.819 } 00:19:38.819 } 00:19:38.819 ] 00:19:38.819 }, 00:19:38.819 { 00:19:38.819 "subsystem": "bdev", 00:19:38.819 "config": [ 00:19:38.819 { 00:19:38.819 "method": "bdev_set_options", 00:19:38.819 "params": { 00:19:38.819 "bdev_io_pool_size": 65535, 00:19:38.819 "bdev_io_cache_size": 256, 00:19:38.819 "bdev_auto_examine": true, 00:19:38.819 "iobuf_small_cache_size": 128, 00:19:38.819 "iobuf_large_cache_size": 16 00:19:38.819 } 00:19:38.819 }, 00:19:38.819 { 00:19:38.819 "method": "bdev_raid_set_options", 00:19:38.819 "params": { 00:19:38.819 "process_window_size_kb": 1024 00:19:38.819 } 00:19:38.819 }, 00:19:38.819 { 00:19:38.819 "method": "bdev_iscsi_set_options", 00:19:38.819 "params": { 00:19:38.819 "timeout_sec": 30 00:19:38.819 } 00:19:38.819 }, 00:19:38.819 { 00:19:38.819 "method": "bdev_nvme_set_options", 00:19:38.819 "params": { 00:19:38.819 "action_on_timeout": "none", 00:19:38.819 "timeout_us": 0, 00:19:38.819 "timeout_admin_us": 0, 00:19:38.819 "keep_alive_timeout_ms": 10000, 00:19:38.819 "arbitration_burst": 0, 00:19:38.819 "low_priority_weight": 0, 00:19:38.819 "medium_priority_weight": 0, 00:19:38.819 "high_priority_weight": 0, 00:19:38.819 "nvme_adminq_poll_period_us": 10000, 00:19:38.819 "nvme_ioq_poll_period_us": 0, 00:19:38.819 "io_queue_requests": 0, 00:19:38.819 "delay_cmd_submit": true, 00:19:38.819 "transport_retry_count": 4, 00:19:38.819 "bdev_retry_count": 3, 00:19:38.819 "transport_ack_timeout": 0, 00:19:38.819 "ctrlr_loss_timeout_sec": 0, 00:19:38.819 "reconnect_delay_sec": 0, 00:19:38.819 "fast_io_fail_timeout_sec": 0, 00:19:38.820 "disable_auto_failback": false, 00:19:38.820 "generate_uuids": false, 00:19:38.820 "transport_tos": 0, 00:19:38.820 "nvme_error_stat": false, 00:19:38.820 "rdma_srq_size": 0, 00:19:38.820 "io_path_stat": false, 00:19:38.820 "allow_accel_sequence": false, 00:19:38.820 "rdma_max_cq_size": 0, 00:19:38.820 "rdma_cm_event_timeout_ms": 0, 00:19:38.820 "dhchap_digests": [ 00:19:38.820 "sha256", 00:19:38.820 "sha384", 00:19:38.820 "sha512" 00:19:38.820 ], 00:19:38.820 "dhchap_dhgroups": [ 00:19:38.820 "null", 00:19:38.820 "ffdhe2048", 00:19:38.820 "ffdhe3072", 00:19:38.820 "ffdhe4096", 00:19:38.820 "ffdhe6144", 00:19:38.820 "ffdhe8192" 00:19:38.820 ] 00:19:38.820 } 00:19:38.820 }, 00:19:38.820 { 00:19:38.820 "method": "bdev_nvme_set_hotplug", 00:19:38.820 "params": { 00:19:38.820 "period_us": 100000, 00:19:38.820 "enable": false 00:19:38.820 } 00:19:38.820 }, 00:19:38.820 { 00:19:38.820 "method": "bdev_malloc_create", 00:19:38.820 "params": { 00:19:38.820 "name": "malloc0", 00:19:38.820 "num_blocks": 8192, 00:19:38.820 "block_size": 4096, 00:19:38.820 "physical_block_size": 4096, 
00:19:38.820 "uuid": "cad95d2d-1677-47e5-a4b2-88ecdb5f0078", 00:19:38.820 "optimal_io_boundary": 0 00:19:38.820 } 00:19:38.820 }, 00:19:38.820 { 00:19:38.820 "method": "bdev_wait_for_examine" 00:19:38.820 } 00:19:38.820 ] 00:19:38.820 }, 00:19:38.820 { 00:19:38.820 "subsystem": "nbd", 00:19:38.820 "config": [] 00:19:38.820 }, 00:19:38.820 { 00:19:38.820 "subsystem": "scheduler", 00:19:38.820 "config": [ 00:19:38.820 { 00:19:38.820 "method": "framework_set_scheduler", 00:19:38.820 "params": { 00:19:38.820 "name": "static" 00:19:38.820 } 00:19:38.820 } 00:19:38.820 ] 00:19:38.820 }, 00:19:38.820 { 00:19:38.820 "subsystem": "nvmf", 00:19:38.820 "config": [ 00:19:38.820 { 00:19:38.820 "method": "nvmf_set_config", 00:19:38.820 "params": { 00:19:38.820 "discovery_filter": "match_any", 00:19:38.820 "admin_cmd_passthru": { 00:19:38.820 "identify_ctrlr": false 00:19:38.820 } 00:19:38.820 } 00:19:38.820 }, 00:19:38.820 { 00:19:38.820 "method": "nvmf_set_max_subsystems", 00:19:38.820 "params": { 00:19:38.820 "max_subsystems": 1024 00:19:38.820 } 00:19:38.820 }, 00:19:38.820 { 00:19:38.820 "method": "nvmf_set_crdt", 00:19:38.820 "params": { 00:19:38.820 "crdt1": 0, 00:19:38.820 "crdt2": 0, 00:19:38.820 "crdt3": 0 00:19:38.820 } 00:19:38.820 }, 00:19:38.820 { 00:19:38.820 "method": "nvmf_create_transport", 00:19:38.820 "params": { 00:19:38.820 "trtype": "TCP", 00:19:38.820 "max_queue_depth": 128, 00:19:38.820 "max_io_qpairs_per_ctrlr": 127, 00:19:38.820 "in_capsule_data_size": 4096, 00:19:38.820 "max_io_size": 131072, 00:19:38.820 "io_unit_size": 131072, 00:19:38.820 "max_aq_depth": 128, 00:19:38.820 "num_shared_buffers": 511, 00:19:38.820 "buf_cache_size": 4294967295, 00:19:38.820 "dif_insert_or_strip": false, 00:19:38.820 "zcopy": false, 00:19:38.820 "c2h_success": false, 00:19:38.820 "sock_priority": 0, 00:19:38.820 "abort_timeout_sec": 1, 00:19:38.820 "ack_timeout": 0, 00:19:38.820 "data_wr_pool_size": 0 00:19:38.820 } 00:19:38.820 }, 00:19:38.820 { 00:19:38.820 "method": "nvmf_create_subsystem", 00:19:38.820 "params": { 00:19:38.820 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.820 "allow_any_host": false, 00:19:38.820 "serial_number": "SPDK00000000000001", 00:19:38.820 "model_number": "SPDK bdev Controller", 00:19:38.820 "max_namespaces": 10, 00:19:38.820 "min_cntlid": 1, 00:19:38.820 "max_cntlid": 65519, 00:19:38.820 "ana_reporting": false 00:19:38.820 } 00:19:38.820 }, 00:19:38.820 { 00:19:38.820 "method": "nvmf_subsystem_add_host", 00:19:38.820 "params": { 00:19:38.820 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.820 "host": "nqn.2016-06.io.spdk:host1", 00:19:38.820 "psk": "/tmp/tmp.XJLU5LmErb" 00:19:38.820 } 00:19:38.820 }, 00:19:38.820 { 00:19:38.820 "method": "nvmf_subsystem_add_ns", 00:19:38.820 "params": { 00:19:38.820 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.820 "namespace": { 00:19:38.820 "nsid": 1, 00:19:38.820 "bdev_name": "malloc0", 00:19:38.820 "nguid": "CAD95D2D167747E5A4B288ECDB5F0078", 00:19:38.820 "uuid": "cad95d2d-1677-47e5-a4b2-88ecdb5f0078", 00:19:38.820 "no_auto_visible": false 00:19:38.820 } 00:19:38.820 } 00:19:38.820 }, 00:19:38.820 { 00:19:38.820 "method": "nvmf_subsystem_add_listener", 00:19:38.820 "params": { 00:19:38.820 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.820 "listen_address": { 00:19:38.820 "trtype": "TCP", 00:19:38.820 "adrfam": "IPv4", 00:19:38.820 "traddr": "10.0.0.2", 00:19:38.820 "trsvcid": "4420" 00:19:38.820 }, 00:19:38.820 "secure_channel": true 00:19:38.820 } 00:19:38.820 } 00:19:38.820 ] 00:19:38.820 } 00:19:38.820 ] 00:19:38.820 }' 
00:19:38.820 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.820 14:46:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2360937 00:19:38.820 14:46:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2360937 00:19:38.820 14:46:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:38.820 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2360937 ']' 00:19:38.820 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.820 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.820 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.820 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.820 14:46:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.820 [2024-07-25 14:46:58.993959] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:19:38.820 [2024-07-25 14:46:58.994003] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.820 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.820 [2024-07-25 14:46:59.050168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.080 [2024-07-25 14:46:59.129610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.080 [2024-07-25 14:46:59.129642] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.080 [2024-07-25 14:46:59.129649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.080 [2024-07-25 14:46:59.129655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.080 [2024-07-25 14:46:59.129660] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
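The JSON blob echoed above is the target configuration captured earlier with save_config at target/tls.sh@196; at @203 it is handed to a brand-new nvmf_tgt through -c /dev/fd/62, so the TLS subsystem, namespace and PSK host entry come up at startup without re-issuing the RPCs. A sketch of the same save/replay pattern using a temporary file in place of the process substitution (file name assumed; $SPDK_DIR as in the first sketch):

    # Dump the live target configuration as replayable JSON-RPC calls...
    "$SPDK_DIR"/scripts/rpc.py save_config > /tmp/tgt_config.json
    # ...and feed it to a fresh target at startup. The test additionally wraps
    # this in 'ip netns exec cvl_0_0_ns_spdk' and passes -i 0 -e 0xFFFF.
    "$SPDK_DIR"/build/bin/nvmf_tgt -m 0x2 -c /tmp/tgt_config.json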
00:19:39.080 [2024-07-25 14:46:59.129728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.080 [2024-07-25 14:46:59.331597] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.080 [2024-07-25 14:46:59.355649] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:39.080 [2024-07-25 14:46:59.371684] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:39.080 [2024-07-25 14:46:59.371852] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.651 14:46:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.651 14:46:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:39.651 14:46:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:39.651 14:46:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:39.651 14:46:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.651 14:46:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.651 14:46:59 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2361181 00:19:39.651 14:46:59 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2361181 /var/tmp/bdevperf.sock 00:19:39.651 14:46:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2361181 ']' 00:19:39.651 14:46:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.651 14:46:59 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:39.651 14:46:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:39.651 14:46:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:39.651 14:46:59 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:39.651 "subsystems": [ 00:19:39.651 { 00:19:39.651 "subsystem": "keyring", 00:19:39.651 "config": [] 00:19:39.651 }, 00:19:39.651 { 00:19:39.651 "subsystem": "iobuf", 00:19:39.651 "config": [ 00:19:39.651 { 00:19:39.651 "method": "iobuf_set_options", 00:19:39.651 "params": { 00:19:39.651 "small_pool_count": 8192, 00:19:39.651 "large_pool_count": 1024, 00:19:39.651 "small_bufsize": 8192, 00:19:39.651 "large_bufsize": 135168 00:19:39.651 } 00:19:39.651 } 00:19:39.651 ] 00:19:39.651 }, 00:19:39.651 { 00:19:39.651 "subsystem": "sock", 00:19:39.651 "config": [ 00:19:39.651 { 00:19:39.651 "method": "sock_set_default_impl", 00:19:39.651 "params": { 00:19:39.651 "impl_name": "posix" 00:19:39.651 } 00:19:39.651 }, 00:19:39.651 { 00:19:39.651 "method": "sock_impl_set_options", 00:19:39.651 "params": { 00:19:39.651 "impl_name": "ssl", 00:19:39.651 "recv_buf_size": 4096, 00:19:39.651 "send_buf_size": 4096, 00:19:39.651 "enable_recv_pipe": true, 00:19:39.651 "enable_quickack": false, 00:19:39.651 "enable_placement_id": 0, 00:19:39.651 "enable_zerocopy_send_server": true, 00:19:39.651 "enable_zerocopy_send_client": false, 00:19:39.651 "zerocopy_threshold": 0, 00:19:39.651 "tls_version": 0, 00:19:39.651 "enable_ktls": false 00:19:39.651 } 00:19:39.651 }, 00:19:39.651 { 00:19:39.651 "method": "sock_impl_set_options", 00:19:39.651 "params": { 00:19:39.651 "impl_name": "posix", 00:19:39.651 "recv_buf_size": 2097152, 00:19:39.651 "send_buf_size": 2097152, 00:19:39.651 "enable_recv_pipe": true, 00:19:39.651 "enable_quickack": false, 00:19:39.651 "enable_placement_id": 0, 00:19:39.651 "enable_zerocopy_send_server": true, 00:19:39.651 "enable_zerocopy_send_client": false, 00:19:39.651 "zerocopy_threshold": 0, 00:19:39.651 "tls_version": 0, 00:19:39.651 "enable_ktls": false 00:19:39.651 } 00:19:39.651 } 00:19:39.651 ] 00:19:39.651 }, 00:19:39.651 { 00:19:39.651 "subsystem": "vmd", 00:19:39.651 "config": [] 00:19:39.651 }, 00:19:39.651 { 00:19:39.651 "subsystem": "accel", 00:19:39.651 "config": [ 00:19:39.651 { 00:19:39.651 "method": "accel_set_options", 00:19:39.651 "params": { 00:19:39.651 "small_cache_size": 128, 00:19:39.651 "large_cache_size": 16, 00:19:39.651 "task_count": 2048, 00:19:39.651 "sequence_count": 2048, 00:19:39.651 "buf_count": 2048 00:19:39.651 } 00:19:39.651 } 00:19:39.651 ] 00:19:39.651 }, 00:19:39.651 { 00:19:39.651 "subsystem": "bdev", 00:19:39.651 "config": [ 00:19:39.651 { 00:19:39.651 "method": "bdev_set_options", 00:19:39.651 "params": { 00:19:39.651 "bdev_io_pool_size": 65535, 00:19:39.651 "bdev_io_cache_size": 256, 00:19:39.651 "bdev_auto_examine": true, 00:19:39.651 "iobuf_small_cache_size": 128, 00:19:39.651 "iobuf_large_cache_size": 16 00:19:39.651 } 00:19:39.651 }, 00:19:39.651 { 00:19:39.651 "method": "bdev_raid_set_options", 00:19:39.651 "params": { 00:19:39.651 "process_window_size_kb": 1024 00:19:39.651 } 00:19:39.651 }, 00:19:39.651 { 00:19:39.651 "method": "bdev_iscsi_set_options", 00:19:39.651 "params": { 00:19:39.651 "timeout_sec": 30 00:19:39.651 } 00:19:39.651 }, 00:19:39.651 { 00:19:39.652 "method": "bdev_nvme_set_options", 00:19:39.652 "params": { 00:19:39.652 "action_on_timeout": "none", 00:19:39.652 "timeout_us": 0, 00:19:39.652 "timeout_admin_us": 0, 00:19:39.652 "keep_alive_timeout_ms": 10000, 00:19:39.652 "arbitration_burst": 0, 00:19:39.652 "low_priority_weight": 0, 00:19:39.652 "medium_priority_weight": 0, 00:19:39.652 "high_priority_weight": 0, 00:19:39.652 
"nvme_adminq_poll_period_us": 10000, 00:19:39.652 "nvme_ioq_poll_period_us": 0, 00:19:39.652 "io_queue_requests": 512, 00:19:39.652 "delay_cmd_submit": true, 00:19:39.652 "transport_retry_count": 4, 00:19:39.652 "bdev_retry_count": 3, 00:19:39.652 "transport_ack_timeout": 0, 00:19:39.652 "ctrlr_loss_timeout_sec": 0, 00:19:39.652 "reconnect_delay_sec": 0, 00:19:39.652 "fast_io_fail_timeout_sec": 0, 00:19:39.652 "disable_auto_failback": false, 00:19:39.652 "generate_uuids": false, 00:19:39.652 "transport_tos": 0, 00:19:39.652 "nvme_error_stat": false, 00:19:39.652 "rdma_srq_size": 0, 00:19:39.652 "io_path_stat": false, 00:19:39.652 "allow_accel_sequence": false, 00:19:39.652 "rdma_max_cq_size": 0, 00:19:39.652 "rdma_cm_event_timeout_ms": 0, 00:19:39.652 "dhchap_digests": [ 00:19:39.652 "sha256", 00:19:39.652 "sha384", 00:19:39.652 "sha512" 00:19:39.652 ], 00:19:39.652 "dhchap_dhgroups": [ 00:19:39.652 "null", 00:19:39.652 "ffdhe2048", 00:19:39.652 "ffdhe3072", 00:19:39.652 "ffdhe4096", 00:19:39.652 "ffdhe6144", 00:19:39.652 "ffdhe8192" 00:19:39.652 ] 00:19:39.652 } 00:19:39.652 }, 00:19:39.652 { 00:19:39.652 "method": "bdev_nvme_attach_controller", 00:19:39.652 "params": { 00:19:39.652 "name": "TLSTEST", 00:19:39.652 "trtype": "TCP", 00:19:39.652 "adrfam": "IPv4", 00:19:39.652 "traddr": "10.0.0.2", 00:19:39.652 "trsvcid": "4420", 00:19:39.652 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.652 "prchk_reftag": false, 00:19:39.652 "prchk_guard": false, 00:19:39.652 "ctrlr_loss_timeout_sec": 0, 00:19:39.652 "reconnect_delay_sec": 0, 00:19:39.652 "fast_io_fail_timeout_sec": 0, 00:19:39.652 "psk": "/tmp/tmp.XJLU5LmErb", 00:19:39.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.652 "hdgst": false, 00:19:39.652 "ddgst": false 00:19:39.652 } 00:19:39.652 }, 00:19:39.652 { 00:19:39.652 "method": "bdev_nvme_set_hotplug", 00:19:39.652 "params": { 00:19:39.652 "period_us": 100000, 00:19:39.652 "enable": false 00:19:39.652 } 00:19:39.652 }, 00:19:39.652 { 00:19:39.652 "method": "bdev_wait_for_examine" 00:19:39.652 } 00:19:39.652 ] 00:19:39.652 }, 00:19:39.652 { 00:19:39.652 "subsystem": "nbd", 00:19:39.652 "config": [] 00:19:39.652 } 00:19:39.652 ] 00:19:39.652 }' 00:19:39.652 14:46:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:39.652 14:46:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.652 [2024-07-25 14:46:59.876227] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:19:39.652 [2024-07-25 14:46:59.876275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361181 ] 00:19:39.652 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.652 [2024-07-25 14:46:59.926240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.913 [2024-07-25 14:46:59.999595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.913 [2024-07-25 14:47:00.143030] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.913 [2024-07-25 14:47:00.143114] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:40.483 14:47:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:40.483 14:47:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:40.483 14:47:00 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:40.483 Running I/O for 10 seconds... 00:19:52.697 00:19:52.697 Latency(us) 00:19:52.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.697 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:52.697 Verification LBA range: start 0x0 length 0x2000 00:19:52.697 TLSTESTn1 : 10.08 1186.91 4.64 0.00 0.00 107501.27 6439.62 156830.50 00:19:52.697 =================================================================================================================== 00:19:52.697 Total : 1186.91 4.64 0.00 0.00 107501.27 6439.62 156830.50 00:19:52.697 0 00:19:52.697 14:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:52.697 14:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2361181 00:19:52.697 14:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2361181 ']' 00:19:52.697 14:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2361181 00:19:52.697 14:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:52.697 14:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.697 14:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2361181 00:19:52.697 14:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:52.697 14:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:52.697 14:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2361181' 00:19:52.697 killing process with pid 2361181 00:19:52.697 14:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2361181 00:19:52.697 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.697 00:19:52.697 Latency(us) 00:19:52.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.697 =================================================================================================================== 00:19:52.697 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.697 [2024-07-25 14:47:10.912104] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:52.697 14:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2361181 00:19:52.697 14:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2360937 00:19:52.697 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2360937 ']' 00:19:52.697 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2360937 00:19:52.697 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:52.697 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360937 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360937' 00:19:52.698 killing process with pid 2360937 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2360937 00:19:52.698 [2024-07-25 14:47:11.139459] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2360937 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2363022 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2363022 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2363022 ']' 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:52.698 14:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.698 [2024-07-25 14:47:11.383186] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
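The 10-second verify run above reports 1186.91 IOPS at 4.64 MiB/s; with the 4096-byte I/O size used by this bdevperf invocation the two figures agree. A quick check (plain shell, no SPDK needed):

    awk 'BEGIN { printf "%.2f MiB/s\n", 1186.91 * 4096 / 1048576 }'   # prints 4.64 MiB/s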
00:19:52.698 [2024-07-25 14:47:11.383231] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.698 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.698 [2024-07-25 14:47:11.440620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.698 [2024-07-25 14:47:11.519127] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.698 [2024-07-25 14:47:11.519162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.698 [2024-07-25 14:47:11.519169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.698 [2024-07-25 14:47:11.519175] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.698 [2024-07-25 14:47:11.519180] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.698 [2024-07-25 14:47:11.519196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.698 14:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.698 14:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:52.698 14:47:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:52.698 14:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:52.698 14:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.698 14:47:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.698 14:47:12 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.XJLU5LmErb 00:19:52.698 14:47:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XJLU5LmErb 00:19:52.698 14:47:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:52.698 [2024-07-25 14:47:12.382942] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.698 14:47:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:52.698 14:47:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:52.698 [2024-07-25 14:47:12.715800] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:52.698 [2024-07-25 14:47:12.715989] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.698 14:47:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:52.698 malloc0 00:19:52.698 14:47:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:52.957 14:47:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.XJLU5LmErb 00:19:52.957 [2024-07-25 14:47:13.233484] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:52.957 14:47:13 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2363283 00:19:52.957 14:47:13 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.957 14:47:13 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2363283 /var/tmp/bdevperf.sock 00:19:52.957 14:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2363283 ']' 00:19:52.957 14:47:13 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:53.237 14:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.237 14:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.237 14:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.237 14:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.237 14:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.237 [2024-07-25 14:47:13.290978] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:19:53.237 [2024-07-25 14:47:13.291026] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363283 ] 00:19:53.237 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.237 [2024-07-25 14:47:13.345391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.237 [2024-07-25 14:47:13.425442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.813 14:47:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:53.813 14:47:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:53.813 14:47:14 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XJLU5LmErb 00:19:54.073 14:47:14 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:54.332 [2024-07-25 14:47:14.405313] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.332 nvme0n1 00:19:54.332 14:47:14 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:54.332 Running I/O for 1 seconds... 
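For this pass the initiator side switches from handing bdev_nvme_attach_controller a PSK file path (flagged in the warnings above as deprecated for removal in v24.09) to registering the key with the keyring first and referring to it by name, which is what target/tls.sh@227 and @228 do. A sketch of that flow against the bdevperf RPC socket, with $SPDK_DIR and $PSK as in the first sketch:

    # Register the PSK file under the name "key0" in the bdevperf keyring...
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$PSK"
    # ...then attach the TLS-protected controller by key name instead of file path.
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1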
00:19:55.713 00:19:55.713 Latency(us) 00:19:55.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.713 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:55.713 Verification LBA range: start 0x0 length 0x2000 00:19:55.713 nvme0n1 : 1.09 856.15 3.34 0.00 0.00 145664.93 6724.56 155006.89 00:19:55.713 =================================================================================================================== 00:19:55.713 Total : 856.15 3.34 0.00 0.00 145664.93 6724.56 155006.89 00:19:55.713 0 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2363283 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2363283 ']' 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2363283 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2363283 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2363283' 00:19:55.713 killing process with pid 2363283 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2363283 00:19:55.713 Received shutdown signal, test time was about 1.000000 seconds 00:19:55.713 00:19:55.713 Latency(us) 00:19:55.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.713 =================================================================================================================== 00:19:55.713 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2363283 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2363022 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2363022 ']' 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2363022 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2363022 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2363022' 00:19:55.713 killing process with pid 2363022 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2363022 00:19:55.713 [2024-07-25 14:47:15.961867] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:55.713 14:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2363022 00:19:55.973 14:47:16 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:55.973 14:47:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:55.973 
14:47:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.973 14:47:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.973 14:47:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2363772 00:19:55.973 14:47:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:55.973 14:47:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2363772 00:19:55.973 14:47:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2363772 ']' 00:19:55.973 14:47:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.973 14:47:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.973 14:47:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.973 14:47:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.973 14:47:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.973 [2024-07-25 14:47:16.202971] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:19:55.973 [2024-07-25 14:47:16.203016] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.973 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.973 [2024-07-25 14:47:16.257602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.232 [2024-07-25 14:47:16.336424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.232 [2024-07-25 14:47:16.336458] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.232 [2024-07-25 14:47:16.336465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.232 [2024-07-25 14:47:16.336471] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.232 [2024-07-25 14:47:16.336476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:56.232 [2024-07-25 14:47:16.336493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.802 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.802 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:56.802 14:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:56.802 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.802 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.802 14:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.802 14:47:17 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:56.802 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.802 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.802 [2024-07-25 14:47:17.047186] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.802 malloc0 00:19:56.802 [2024-07-25 14:47:17.075372] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.802 [2024-07-25 14:47:17.075553] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.061 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.061 14:47:17 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2364003 00:19:57.061 14:47:17 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2364003 /var/tmp/bdevperf.sock 00:19:57.061 14:47:17 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:57.061 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2364003 ']' 00:19:57.061 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.061 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.061 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.061 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.061 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.061 [2024-07-25 14:47:17.150231] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:19:57.061 [2024-07-25 14:47:17.150271] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364003 ] 00:19:57.061 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.061 [2024-07-25 14:47:17.203556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.061 [2024-07-25 14:47:17.276330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.000 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.000 14:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:58.000 14:47:17 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XJLU5LmErb 00:19:58.001 14:47:18 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:58.001 [2024-07-25 14:47:18.264410] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.262 nvme0n1 00:19:58.262 14:47:18 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:58.262 Running I/O for 1 seconds... 00:19:59.643 00:19:59.643 Latency(us) 00:19:59.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.643 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:59.643 Verification LBA range: start 0x0 length 0x2000 00:19:59.643 nvme0n1 : 1.07 837.92 3.27 0.00 0.00 149600.41 6354.14 175978.41 00:19:59.643 =================================================================================================================== 00:19:59.644 Total : 837.92 3.27 0.00 0.00 149600.41 6354.14 175978.41 00:19:59.644 0 00:19:59.644 14:47:19 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:59.644 14:47:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.644 14:47:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.644 14:47:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.644 14:47:19 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:59.644 "subsystems": [ 00:19:59.644 { 00:19:59.644 "subsystem": "keyring", 00:19:59.644 "config": [ 00:19:59.644 { 00:19:59.644 "method": "keyring_file_add_key", 00:19:59.644 "params": { 00:19:59.644 "name": "key0", 00:19:59.644 "path": "/tmp/tmp.XJLU5LmErb" 00:19:59.644 } 00:19:59.644 } 00:19:59.644 ] 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "subsystem": "iobuf", 00:19:59.644 "config": [ 00:19:59.644 { 00:19:59.644 "method": "iobuf_set_options", 00:19:59.644 "params": { 00:19:59.644 "small_pool_count": 8192, 00:19:59.644 "large_pool_count": 1024, 00:19:59.644 "small_bufsize": 8192, 00:19:59.644 "large_bufsize": 135168 00:19:59.644 } 00:19:59.644 } 00:19:59.644 ] 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "subsystem": "sock", 00:19:59.644 "config": [ 00:19:59.644 { 00:19:59.644 "method": "sock_set_default_impl", 00:19:59.644 "params": { 00:19:59.644 "impl_name": "posix" 00:19:59.644 } 
00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "method": "sock_impl_set_options", 00:19:59.644 "params": { 00:19:59.644 "impl_name": "ssl", 00:19:59.644 "recv_buf_size": 4096, 00:19:59.644 "send_buf_size": 4096, 00:19:59.644 "enable_recv_pipe": true, 00:19:59.644 "enable_quickack": false, 00:19:59.644 "enable_placement_id": 0, 00:19:59.644 "enable_zerocopy_send_server": true, 00:19:59.644 "enable_zerocopy_send_client": false, 00:19:59.644 "zerocopy_threshold": 0, 00:19:59.644 "tls_version": 0, 00:19:59.644 "enable_ktls": false 00:19:59.644 } 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "method": "sock_impl_set_options", 00:19:59.644 "params": { 00:19:59.644 "impl_name": "posix", 00:19:59.644 "recv_buf_size": 2097152, 00:19:59.644 "send_buf_size": 2097152, 00:19:59.644 "enable_recv_pipe": true, 00:19:59.644 "enable_quickack": false, 00:19:59.644 "enable_placement_id": 0, 00:19:59.644 "enable_zerocopy_send_server": true, 00:19:59.644 "enable_zerocopy_send_client": false, 00:19:59.644 "zerocopy_threshold": 0, 00:19:59.644 "tls_version": 0, 00:19:59.644 "enable_ktls": false 00:19:59.644 } 00:19:59.644 } 00:19:59.644 ] 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "subsystem": "vmd", 00:19:59.644 "config": [] 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "subsystem": "accel", 00:19:59.644 "config": [ 00:19:59.644 { 00:19:59.644 "method": "accel_set_options", 00:19:59.644 "params": { 00:19:59.644 "small_cache_size": 128, 00:19:59.644 "large_cache_size": 16, 00:19:59.644 "task_count": 2048, 00:19:59.644 "sequence_count": 2048, 00:19:59.644 "buf_count": 2048 00:19:59.644 } 00:19:59.644 } 00:19:59.644 ] 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "subsystem": "bdev", 00:19:59.644 "config": [ 00:19:59.644 { 00:19:59.644 "method": "bdev_set_options", 00:19:59.644 "params": { 00:19:59.644 "bdev_io_pool_size": 65535, 00:19:59.644 "bdev_io_cache_size": 256, 00:19:59.644 "bdev_auto_examine": true, 00:19:59.644 "iobuf_small_cache_size": 128, 00:19:59.644 "iobuf_large_cache_size": 16 00:19:59.644 } 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "method": "bdev_raid_set_options", 00:19:59.644 "params": { 00:19:59.644 "process_window_size_kb": 1024 00:19:59.644 } 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "method": "bdev_iscsi_set_options", 00:19:59.644 "params": { 00:19:59.644 "timeout_sec": 30 00:19:59.644 } 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "method": "bdev_nvme_set_options", 00:19:59.644 "params": { 00:19:59.644 "action_on_timeout": "none", 00:19:59.644 "timeout_us": 0, 00:19:59.644 "timeout_admin_us": 0, 00:19:59.644 "keep_alive_timeout_ms": 10000, 00:19:59.644 "arbitration_burst": 0, 00:19:59.644 "low_priority_weight": 0, 00:19:59.644 "medium_priority_weight": 0, 00:19:59.644 "high_priority_weight": 0, 00:19:59.644 "nvme_adminq_poll_period_us": 10000, 00:19:59.644 "nvme_ioq_poll_period_us": 0, 00:19:59.644 "io_queue_requests": 0, 00:19:59.644 "delay_cmd_submit": true, 00:19:59.644 "transport_retry_count": 4, 00:19:59.644 "bdev_retry_count": 3, 00:19:59.644 "transport_ack_timeout": 0, 00:19:59.644 "ctrlr_loss_timeout_sec": 0, 00:19:59.644 "reconnect_delay_sec": 0, 00:19:59.644 "fast_io_fail_timeout_sec": 0, 00:19:59.644 "disable_auto_failback": false, 00:19:59.644 "generate_uuids": false, 00:19:59.644 "transport_tos": 0, 00:19:59.644 "nvme_error_stat": false, 00:19:59.644 "rdma_srq_size": 0, 00:19:59.644 "io_path_stat": false, 00:19:59.644 "allow_accel_sequence": false, 00:19:59.644 "rdma_max_cq_size": 0, 00:19:59.644 "rdma_cm_event_timeout_ms": 0, 00:19:59.644 "dhchap_digests": [ 00:19:59.644 "sha256", 
00:19:59.644 "sha384", 00:19:59.644 "sha512" 00:19:59.644 ], 00:19:59.644 "dhchap_dhgroups": [ 00:19:59.644 "null", 00:19:59.644 "ffdhe2048", 00:19:59.644 "ffdhe3072", 00:19:59.644 "ffdhe4096", 00:19:59.644 "ffdhe6144", 00:19:59.644 "ffdhe8192" 00:19:59.644 ] 00:19:59.644 } 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "method": "bdev_nvme_set_hotplug", 00:19:59.644 "params": { 00:19:59.644 "period_us": 100000, 00:19:59.644 "enable": false 00:19:59.644 } 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "method": "bdev_malloc_create", 00:19:59.644 "params": { 00:19:59.644 "name": "malloc0", 00:19:59.644 "num_blocks": 8192, 00:19:59.644 "block_size": 4096, 00:19:59.644 "physical_block_size": 4096, 00:19:59.644 "uuid": "241c4e8e-fbe4-4a8e-bafb-9b4456dbfe68", 00:19:59.644 "optimal_io_boundary": 0 00:19:59.644 } 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "method": "bdev_wait_for_examine" 00:19:59.644 } 00:19:59.644 ] 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "subsystem": "nbd", 00:19:59.644 "config": [] 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "subsystem": "scheduler", 00:19:59.644 "config": [ 00:19:59.644 { 00:19:59.644 "method": "framework_set_scheduler", 00:19:59.644 "params": { 00:19:59.644 "name": "static" 00:19:59.644 } 00:19:59.644 } 00:19:59.644 ] 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "subsystem": "nvmf", 00:19:59.644 "config": [ 00:19:59.644 { 00:19:59.644 "method": "nvmf_set_config", 00:19:59.644 "params": { 00:19:59.644 "discovery_filter": "match_any", 00:19:59.644 "admin_cmd_passthru": { 00:19:59.644 "identify_ctrlr": false 00:19:59.644 } 00:19:59.644 } 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "method": "nvmf_set_max_subsystems", 00:19:59.644 "params": { 00:19:59.644 "max_subsystems": 1024 00:19:59.644 } 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "method": "nvmf_set_crdt", 00:19:59.644 "params": { 00:19:59.644 "crdt1": 0, 00:19:59.644 "crdt2": 0, 00:19:59.644 "crdt3": 0 00:19:59.644 } 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "method": "nvmf_create_transport", 00:19:59.644 "params": { 00:19:59.644 "trtype": "TCP", 00:19:59.644 "max_queue_depth": 128, 00:19:59.644 "max_io_qpairs_per_ctrlr": 127, 00:19:59.644 "in_capsule_data_size": 4096, 00:19:59.644 "max_io_size": 131072, 00:19:59.644 "io_unit_size": 131072, 00:19:59.644 "max_aq_depth": 128, 00:19:59.644 "num_shared_buffers": 511, 00:19:59.644 "buf_cache_size": 4294967295, 00:19:59.644 "dif_insert_or_strip": false, 00:19:59.644 "zcopy": false, 00:19:59.644 "c2h_success": false, 00:19:59.644 "sock_priority": 0, 00:19:59.644 "abort_timeout_sec": 1, 00:19:59.644 "ack_timeout": 0, 00:19:59.644 "data_wr_pool_size": 0 00:19:59.644 } 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "method": "nvmf_create_subsystem", 00:19:59.644 "params": { 00:19:59.644 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.644 "allow_any_host": false, 00:19:59.644 "serial_number": "00000000000000000000", 00:19:59.644 "model_number": "SPDK bdev Controller", 00:19:59.644 "max_namespaces": 32, 00:19:59.644 "min_cntlid": 1, 00:19:59.644 "max_cntlid": 65519, 00:19:59.644 "ana_reporting": false 00:19:59.644 } 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "method": "nvmf_subsystem_add_host", 00:19:59.644 "params": { 00:19:59.644 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.644 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.644 "psk": "key0" 00:19:59.644 } 00:19:59.644 }, 00:19:59.644 { 00:19:59.644 "method": "nvmf_subsystem_add_ns", 00:19:59.644 "params": { 00:19:59.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.645 "namespace": { 00:19:59.645 "nsid": 1, 
00:19:59.645 "bdev_name": "malloc0", 00:19:59.645 "nguid": "241C4E8EFBE44A8EBAFB9B4456DBFE68", 00:19:59.645 "uuid": "241c4e8e-fbe4-4a8e-bafb-9b4456dbfe68", 00:19:59.645 "no_auto_visible": false 00:19:59.645 } 00:19:59.645 } 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "method": "nvmf_subsystem_add_listener", 00:19:59.645 "params": { 00:19:59.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.645 "listen_address": { 00:19:59.645 "trtype": "TCP", 00:19:59.645 "adrfam": "IPv4", 00:19:59.645 "traddr": "10.0.0.2", 00:19:59.645 "trsvcid": "4420" 00:19:59.645 }, 00:19:59.645 "secure_channel": false, 00:19:59.645 "sock_impl": "ssl" 00:19:59.645 } 00:19:59.645 } 00:19:59.645 ] 00:19:59.645 } 00:19:59.645 ] 00:19:59.645 }' 00:19:59.645 14:47:19 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:59.645 14:47:19 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:59.645 "subsystems": [ 00:19:59.645 { 00:19:59.645 "subsystem": "keyring", 00:19:59.645 "config": [ 00:19:59.645 { 00:19:59.645 "method": "keyring_file_add_key", 00:19:59.645 "params": { 00:19:59.645 "name": "key0", 00:19:59.645 "path": "/tmp/tmp.XJLU5LmErb" 00:19:59.645 } 00:19:59.645 } 00:19:59.645 ] 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "subsystem": "iobuf", 00:19:59.645 "config": [ 00:19:59.645 { 00:19:59.645 "method": "iobuf_set_options", 00:19:59.645 "params": { 00:19:59.645 "small_pool_count": 8192, 00:19:59.645 "large_pool_count": 1024, 00:19:59.645 "small_bufsize": 8192, 00:19:59.645 "large_bufsize": 135168 00:19:59.645 } 00:19:59.645 } 00:19:59.645 ] 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "subsystem": "sock", 00:19:59.645 "config": [ 00:19:59.645 { 00:19:59.645 "method": "sock_set_default_impl", 00:19:59.645 "params": { 00:19:59.645 "impl_name": "posix" 00:19:59.645 } 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "method": "sock_impl_set_options", 00:19:59.645 "params": { 00:19:59.645 "impl_name": "ssl", 00:19:59.645 "recv_buf_size": 4096, 00:19:59.645 "send_buf_size": 4096, 00:19:59.645 "enable_recv_pipe": true, 00:19:59.645 "enable_quickack": false, 00:19:59.645 "enable_placement_id": 0, 00:19:59.645 "enable_zerocopy_send_server": true, 00:19:59.645 "enable_zerocopy_send_client": false, 00:19:59.645 "zerocopy_threshold": 0, 00:19:59.645 "tls_version": 0, 00:19:59.645 "enable_ktls": false 00:19:59.645 } 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "method": "sock_impl_set_options", 00:19:59.645 "params": { 00:19:59.645 "impl_name": "posix", 00:19:59.645 "recv_buf_size": 2097152, 00:19:59.645 "send_buf_size": 2097152, 00:19:59.645 "enable_recv_pipe": true, 00:19:59.645 "enable_quickack": false, 00:19:59.645 "enable_placement_id": 0, 00:19:59.645 "enable_zerocopy_send_server": true, 00:19:59.645 "enable_zerocopy_send_client": false, 00:19:59.645 "zerocopy_threshold": 0, 00:19:59.645 "tls_version": 0, 00:19:59.645 "enable_ktls": false 00:19:59.645 } 00:19:59.645 } 00:19:59.645 ] 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "subsystem": "vmd", 00:19:59.645 "config": [] 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "subsystem": "accel", 00:19:59.645 "config": [ 00:19:59.645 { 00:19:59.645 "method": "accel_set_options", 00:19:59.645 "params": { 00:19:59.645 "small_cache_size": 128, 00:19:59.645 "large_cache_size": 16, 00:19:59.645 "task_count": 2048, 00:19:59.645 "sequence_count": 2048, 00:19:59.645 "buf_count": 2048 00:19:59.645 } 00:19:59.645 } 00:19:59.645 ] 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "subsystem": "bdev", 
00:19:59.645 "config": [ 00:19:59.645 { 00:19:59.645 "method": "bdev_set_options", 00:19:59.645 "params": { 00:19:59.645 "bdev_io_pool_size": 65535, 00:19:59.645 "bdev_io_cache_size": 256, 00:19:59.645 "bdev_auto_examine": true, 00:19:59.645 "iobuf_small_cache_size": 128, 00:19:59.645 "iobuf_large_cache_size": 16 00:19:59.645 } 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "method": "bdev_raid_set_options", 00:19:59.645 "params": { 00:19:59.645 "process_window_size_kb": 1024 00:19:59.645 } 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "method": "bdev_iscsi_set_options", 00:19:59.645 "params": { 00:19:59.645 "timeout_sec": 30 00:19:59.645 } 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "method": "bdev_nvme_set_options", 00:19:59.645 "params": { 00:19:59.645 "action_on_timeout": "none", 00:19:59.645 "timeout_us": 0, 00:19:59.645 "timeout_admin_us": 0, 00:19:59.645 "keep_alive_timeout_ms": 10000, 00:19:59.645 "arbitration_burst": 0, 00:19:59.645 "low_priority_weight": 0, 00:19:59.645 "medium_priority_weight": 0, 00:19:59.645 "high_priority_weight": 0, 00:19:59.645 "nvme_adminq_poll_period_us": 10000, 00:19:59.645 "nvme_ioq_poll_period_us": 0, 00:19:59.645 "io_queue_requests": 512, 00:19:59.645 "delay_cmd_submit": true, 00:19:59.645 "transport_retry_count": 4, 00:19:59.645 "bdev_retry_count": 3, 00:19:59.645 "transport_ack_timeout": 0, 00:19:59.645 "ctrlr_loss_timeout_sec": 0, 00:19:59.645 "reconnect_delay_sec": 0, 00:19:59.645 "fast_io_fail_timeout_sec": 0, 00:19:59.645 "disable_auto_failback": false, 00:19:59.645 "generate_uuids": false, 00:19:59.645 "transport_tos": 0, 00:19:59.645 "nvme_error_stat": false, 00:19:59.645 "rdma_srq_size": 0, 00:19:59.645 "io_path_stat": false, 00:19:59.645 "allow_accel_sequence": false, 00:19:59.645 "rdma_max_cq_size": 0, 00:19:59.645 "rdma_cm_event_timeout_ms": 0, 00:19:59.645 "dhchap_digests": [ 00:19:59.645 "sha256", 00:19:59.645 "sha384", 00:19:59.645 "sha512" 00:19:59.645 ], 00:19:59.645 "dhchap_dhgroups": [ 00:19:59.645 "null", 00:19:59.645 "ffdhe2048", 00:19:59.645 "ffdhe3072", 00:19:59.645 "ffdhe4096", 00:19:59.645 "ffdhe6144", 00:19:59.645 "ffdhe8192" 00:19:59.645 ] 00:19:59.645 } 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "method": "bdev_nvme_attach_controller", 00:19:59.645 "params": { 00:19:59.645 "name": "nvme0", 00:19:59.645 "trtype": "TCP", 00:19:59.645 "adrfam": "IPv4", 00:19:59.645 "traddr": "10.0.0.2", 00:19:59.645 "trsvcid": "4420", 00:19:59.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.645 "prchk_reftag": false, 00:19:59.645 "prchk_guard": false, 00:19:59.645 "ctrlr_loss_timeout_sec": 0, 00:19:59.645 "reconnect_delay_sec": 0, 00:19:59.645 "fast_io_fail_timeout_sec": 0, 00:19:59.645 "psk": "key0", 00:19:59.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.645 "hdgst": false, 00:19:59.645 "ddgst": false 00:19:59.645 } 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "method": "bdev_nvme_set_hotplug", 00:19:59.645 "params": { 00:19:59.645 "period_us": 100000, 00:19:59.645 "enable": false 00:19:59.645 } 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "method": "bdev_enable_histogram", 00:19:59.645 "params": { 00:19:59.645 "name": "nvme0n1", 00:19:59.645 "enable": true 00:19:59.645 } 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "method": "bdev_wait_for_examine" 00:19:59.645 } 00:19:59.645 ] 00:19:59.645 }, 00:19:59.645 { 00:19:59.645 "subsystem": "nbd", 00:19:59.645 "config": [] 00:19:59.646 } 00:19:59.646 ] 00:19:59.646 }' 00:19:59.646 14:47:19 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 2364003 00:19:59.646 14:47:19 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 2364003 ']' 00:19:59.646 14:47:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2364003 00:19:59.646 14:47:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:59.646 14:47:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.646 14:47:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2364003 00:19:59.906 14:47:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:59.906 14:47:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:59.906 14:47:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2364003' 00:19:59.906 killing process with pid 2364003 00:19:59.906 14:47:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2364003 00:19:59.906 Received shutdown signal, test time was about 1.000000 seconds 00:19:59.906 00:19:59.906 Latency(us) 00:19:59.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.906 =================================================================================================================== 00:19:59.906 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.906 14:47:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2364003 00:19:59.906 14:47:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 2363772 00:19:59.906 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2363772 ']' 00:19:59.906 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2363772 00:19:59.906 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:59.906 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.906 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2363772 00:19:59.906 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:59.906 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:59.906 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2363772' 00:19:59.906 killing process with pid 2363772 00:19:59.906 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2363772 00:19:59.906 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2363772 00:20:00.167 14:47:20 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:20:00.167 14:47:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.167 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.167 14:47:20 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:20:00.167 "subsystems": [ 00:20:00.167 { 00:20:00.167 "subsystem": "keyring", 00:20:00.167 "config": [ 00:20:00.167 { 00:20:00.167 "method": "keyring_file_add_key", 00:20:00.167 "params": { 00:20:00.167 "name": "key0", 00:20:00.167 "path": "/tmp/tmp.XJLU5LmErb" 00:20:00.167 } 00:20:00.167 } 00:20:00.167 ] 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "subsystem": "iobuf", 00:20:00.167 "config": [ 00:20:00.167 { 00:20:00.167 "method": "iobuf_set_options", 00:20:00.167 "params": { 00:20:00.167 "small_pool_count": 8192, 00:20:00.167 "large_pool_count": 1024, 00:20:00.167 "small_bufsize": 8192, 00:20:00.167 "large_bufsize": 135168 00:20:00.167 } 00:20:00.167 } 00:20:00.167 ] 00:20:00.167 }, 
00:20:00.167 { 00:20:00.167 "subsystem": "sock", 00:20:00.167 "config": [ 00:20:00.167 { 00:20:00.167 "method": "sock_set_default_impl", 00:20:00.167 "params": { 00:20:00.167 "impl_name": "posix" 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "sock_impl_set_options", 00:20:00.167 "params": { 00:20:00.167 "impl_name": "ssl", 00:20:00.167 "recv_buf_size": 4096, 00:20:00.167 "send_buf_size": 4096, 00:20:00.167 "enable_recv_pipe": true, 00:20:00.167 "enable_quickack": false, 00:20:00.167 "enable_placement_id": 0, 00:20:00.167 "enable_zerocopy_send_server": true, 00:20:00.167 "enable_zerocopy_send_client": false, 00:20:00.167 "zerocopy_threshold": 0, 00:20:00.167 "tls_version": 0, 00:20:00.167 "enable_ktls": false 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "sock_impl_set_options", 00:20:00.167 "params": { 00:20:00.167 "impl_name": "posix", 00:20:00.167 "recv_buf_size": 2097152, 00:20:00.167 "send_buf_size": 2097152, 00:20:00.167 "enable_recv_pipe": true, 00:20:00.167 "enable_quickack": false, 00:20:00.167 "enable_placement_id": 0, 00:20:00.167 "enable_zerocopy_send_server": true, 00:20:00.167 "enable_zerocopy_send_client": false, 00:20:00.167 "zerocopy_threshold": 0, 00:20:00.167 "tls_version": 0, 00:20:00.167 "enable_ktls": false 00:20:00.167 } 00:20:00.167 } 00:20:00.167 ] 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "subsystem": "vmd", 00:20:00.167 "config": [] 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "subsystem": "accel", 00:20:00.167 "config": [ 00:20:00.167 { 00:20:00.167 "method": "accel_set_options", 00:20:00.167 "params": { 00:20:00.167 "small_cache_size": 128, 00:20:00.167 "large_cache_size": 16, 00:20:00.167 "task_count": 2048, 00:20:00.167 "sequence_count": 2048, 00:20:00.167 "buf_count": 2048 00:20:00.167 } 00:20:00.167 } 00:20:00.167 ] 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "subsystem": "bdev", 00:20:00.167 "config": [ 00:20:00.167 { 00:20:00.167 "method": "bdev_set_options", 00:20:00.167 "params": { 00:20:00.167 "bdev_io_pool_size": 65535, 00:20:00.167 "bdev_io_cache_size": 256, 00:20:00.167 "bdev_auto_examine": true, 00:20:00.167 "iobuf_small_cache_size": 128, 00:20:00.167 "iobuf_large_cache_size": 16 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "bdev_raid_set_options", 00:20:00.167 "params": { 00:20:00.167 "process_window_size_kb": 1024 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "bdev_iscsi_set_options", 00:20:00.167 "params": { 00:20:00.167 "timeout_sec": 30 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "bdev_nvme_set_options", 00:20:00.167 "params": { 00:20:00.167 "action_on_timeout": "none", 00:20:00.167 "timeout_us": 0, 00:20:00.167 "timeout_admin_us": 0, 00:20:00.167 "keep_alive_timeout_ms": 10000, 00:20:00.167 "arbitration_burst": 0, 00:20:00.167 "low_priority_weight": 0, 00:20:00.167 "medium_priority_weight": 0, 00:20:00.167 "high_priority_weight": 0, 00:20:00.167 "nvme_adminq_poll_period_us": 10000, 00:20:00.167 "nvme_ioq_poll_period_us": 0, 00:20:00.167 "io_queue_requests": 0, 00:20:00.167 "delay_cmd_submit": true, 00:20:00.167 "transport_retry_count": 4, 00:20:00.167 "bdev_retry_count": 3, 00:20:00.167 "transport_ack_timeout": 0, 00:20:00.167 "ctrlr_loss_timeout_sec": 0, 00:20:00.167 "reconnect_delay_sec": 0, 00:20:00.167 "fast_io_fail_timeout_sec": 0, 00:20:00.167 "disable_auto_failback": false, 00:20:00.167 "generate_uuids": false, 00:20:00.167 "transport_tos": 0, 00:20:00.167 "nvme_error_stat": false, 00:20:00.167 "rdma_srq_size": 0, 
00:20:00.167 "io_path_stat": false, 00:20:00.167 "allow_accel_sequence": false, 00:20:00.167 "rdma_max_cq_size": 0, 00:20:00.167 "rdma_cm_event_timeout_ms": 0, 00:20:00.167 "dhchap_digests": [ 00:20:00.167 "sha256", 00:20:00.167 "sha384", 00:20:00.167 "sha512" 00:20:00.167 ], 00:20:00.167 "dhchap_dhgroups": [ 00:20:00.167 "null", 00:20:00.167 "ffdhe2048", 00:20:00.167 "ffdhe3072", 00:20:00.167 "ffdhe4096", 00:20:00.167 "ffdhe6144", 00:20:00.167 "ffdhe8192" 00:20:00.167 ] 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "bdev_nvme_set_hotplug", 00:20:00.167 "params": { 00:20:00.167 "period_us": 100000, 00:20:00.167 "enable": false 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "bdev_malloc_create", 00:20:00.167 "params": { 00:20:00.167 "name": "malloc0", 00:20:00.167 "num_blocks": 8192, 00:20:00.167 "block_size": 4096, 00:20:00.167 "physical_block_size": 4096, 00:20:00.167 "uuid": "241c4e8e-fbe4-4a8e-bafb-9b4456dbfe68", 00:20:00.167 "optimal_io_boundary": 0 00:20:00.167 } 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "method": "bdev_wait_for_examine" 00:20:00.167 } 00:20:00.167 ] 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "subsystem": "nbd", 00:20:00.167 "config": [] 00:20:00.167 }, 00:20:00.167 { 00:20:00.167 "subsystem": "scheduler", 00:20:00.167 "config": [ 00:20:00.167 { 00:20:00.167 "method": "framework_set_scheduler", 00:20:00.167 "params": { 00:20:00.167 "name": "static" 00:20:00.168 } 00:20:00.168 } 00:20:00.168 ] 00:20:00.168 }, 00:20:00.168 { 00:20:00.168 "subsystem": "nvmf", 00:20:00.168 "config": [ 00:20:00.168 { 00:20:00.168 "method": "nvmf_set_config", 00:20:00.168 "params": { 00:20:00.168 "discovery_filter": "match_any", 00:20:00.168 "admin_cmd_passthru": { 00:20:00.168 "identify_ctrlr": false 00:20:00.168 } 00:20:00.168 } 00:20:00.168 }, 00:20:00.168 { 00:20:00.168 "method": "nvmf_set_max_subsystems", 00:20:00.168 "params": { 00:20:00.168 "max_subsystems": 1024 00:20:00.168 } 00:20:00.168 }, 00:20:00.168 { 00:20:00.168 "method": "nvmf_set_crdt", 00:20:00.168 "params": { 00:20:00.168 "crdt1": 0, 00:20:00.168 "crdt2": 0, 00:20:00.168 "crdt3": 0 00:20:00.168 } 00:20:00.168 }, 00:20:00.168 { 00:20:00.168 "method": "nvmf_create_transport", 00:20:00.168 "params": { 00:20:00.168 "trtype": "TCP", 00:20:00.168 "max_queue_depth": 128, 00:20:00.168 "max_io_qpairs_per_ctrlr": 127, 00:20:00.168 "in_capsule_data_size": 4096, 00:20:00.168 "max_io_size": 131072, 00:20:00.168 "io_unit_size": 131072, 00:20:00.168 "max_aq_depth": 128, 00:20:00.168 "num_shared_buffers": 511, 00:20:00.168 "buf_cache_size": 4294967295, 00:20:00.168 "dif_insert_or_strip": false, 00:20:00.168 "zcopy": false, 00:20:00.168 "c2h_success": false, 00:20:00.168 "sock_priority": 0, 00:20:00.168 "abort_timeout_sec": 1, 00:20:00.168 "ack_timeout": 0, 00:20:00.168 "data_wr_pool_size": 0 00:20:00.168 } 00:20:00.168 }, 00:20:00.168 { 00:20:00.168 "method": "nvmf_create_subsystem", 00:20:00.168 "params": { 00:20:00.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.168 "allow_any_host": false, 00:20:00.168 "serial_number": "00000000000000000000", 00:20:00.168 "model_number": "SPDK bdev Controller", 00:20:00.168 "max_namespaces": 32, 00:20:00.168 "min_cntlid": 1, 00:20:00.168 "max_cntlid": 65519, 00:20:00.168 "ana_reporting": false 00:20:00.168 } 00:20:00.168 }, 00:20:00.168 { 00:20:00.168 "method": "nvmf_subsystem_add_host", 00:20:00.168 "params": { 00:20:00.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.168 "host": "nqn.2016-06.io.spdk:host1", 00:20:00.168 "psk": "key0" 00:20:00.168 } 
00:20:00.168 }, 00:20:00.168 { 00:20:00.168 "method": "nvmf_subsystem_add_ns", 00:20:00.168 "params": { 00:20:00.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.168 "namespace": { 00:20:00.168 "nsid": 1, 00:20:00.168 "bdev_name": "malloc0", 00:20:00.168 "nguid": "241C4E8EFBE44A8EBAFB9B4456DBFE68", 00:20:00.168 "uuid": "241c4e8e-fbe4-4a8e-bafb-9b4456dbfe68", 00:20:00.168 "no_auto_visible": false 00:20:00.168 } 00:20:00.168 } 00:20:00.168 }, 00:20:00.168 { 00:20:00.168 "method": "nvmf_subsystem_add_listener", 00:20:00.168 "params": { 00:20:00.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.168 "listen_address": { 00:20:00.168 "trtype": "TCP", 00:20:00.168 "adrfam": "IPv4", 00:20:00.168 "traddr": "10.0.0.2", 00:20:00.168 "trsvcid": "4420" 00:20:00.168 }, 00:20:00.168 "secure_channel": false, 00:20:00.168 "sock_impl": "ssl" 00:20:00.168 } 00:20:00.168 } 00:20:00.168 ] 00:20:00.168 } 00:20:00.168 ] 00:20:00.168 }' 00:20:00.168 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.168 14:47:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2364490 00:20:00.168 14:47:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:00.168 14:47:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2364490 00:20:00.168 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2364490 ']' 00:20:00.168 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.168 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.168 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.168 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.168 14:47:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.168 [2024-07-25 14:47:20.418913] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:20:00.168 [2024-07-25 14:47:20.418961] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.168 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.458 [2024-07-25 14:47:20.475011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.458 [2024-07-25 14:47:20.555274] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.458 [2024-07-25 14:47:20.555309] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.458 [2024-07-25 14:47:20.555316] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.458 [2024-07-25 14:47:20.555322] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.458 [2024-07-25 14:47:20.555328] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
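This pass restarts the target and bdevperf from the JSON captured by save_config rather than by re-issuing individual RPCs; the -c /dev/fd/62 and -c /dev/fd/63 arguments visible above and below are consistent with bash process substitution feeding the echoed configuration back in. A minimal sketch of that save-and-replay pattern, assuming process substitution and abbreviating the full tool paths from the log:

    # Capture the live configuration of the running target and of bdevperf.
    tgtcfg=$(rpc.py save_config)
    bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)

    # Restart both applications directly from the captured JSON; the /dev/fd/62
    # and /dev/fd/63 descriptors in the log correspond to these substitutions.
    nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
        -c <(echo "$bperfcfg") &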
00:20:00.458 [2024-07-25 14:47:20.555376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.717 [2024-07-25 14:47:20.768144] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.717 [2024-07-25 14:47:20.818924] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.717 [2024-07-25 14:47:20.819099] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.977 14:47:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.978 14:47:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:00.978 14:47:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:00.978 14:47:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:00.978 14:47:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.978 14:47:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.978 14:47:21 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2364736 00:20:00.978 14:47:21 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2364736 /var/tmp/bdevperf.sock 00:20:00.978 14:47:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2364736 ']' 00:20:00.978 14:47:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.978 14:47:21 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:00.978 14:47:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.978 14:47:21 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:00.978 "subsystems": [ 00:20:00.978 { 00:20:00.978 "subsystem": "keyring", 00:20:00.978 "config": [ 00:20:00.978 { 00:20:00.978 "method": "keyring_file_add_key", 00:20:00.978 "params": { 00:20:00.978 "name": "key0", 00:20:00.978 "path": "/tmp/tmp.XJLU5LmErb" 00:20:00.978 } 00:20:00.978 } 00:20:00.978 ] 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "subsystem": "iobuf", 00:20:00.978 "config": [ 00:20:00.978 { 00:20:00.978 "method": "iobuf_set_options", 00:20:00.978 "params": { 00:20:00.978 "small_pool_count": 8192, 00:20:00.978 "large_pool_count": 1024, 00:20:00.978 "small_bufsize": 8192, 00:20:00.978 "large_bufsize": 135168 00:20:00.978 } 00:20:00.978 } 00:20:00.978 ] 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "subsystem": "sock", 00:20:00.978 "config": [ 00:20:00.978 { 00:20:00.978 "method": "sock_set_default_impl", 00:20:00.978 "params": { 00:20:00.978 "impl_name": "posix" 00:20:00.978 } 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "method": "sock_impl_set_options", 00:20:00.978 "params": { 00:20:00.978 "impl_name": "ssl", 00:20:00.978 "recv_buf_size": 4096, 00:20:00.978 "send_buf_size": 4096, 00:20:00.978 "enable_recv_pipe": true, 00:20:00.978 "enable_quickack": false, 00:20:00.978 "enable_placement_id": 0, 00:20:00.978 "enable_zerocopy_send_server": true, 00:20:00.978 "enable_zerocopy_send_client": false, 00:20:00.978 "zerocopy_threshold": 0, 00:20:00.978 "tls_version": 0, 00:20:00.978 "enable_ktls": false 00:20:00.978 } 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "method": "sock_impl_set_options", 00:20:00.978 "params": { 00:20:00.978 "impl_name": "posix", 00:20:00.978 "recv_buf_size": 2097152, 00:20:00.978 "send_buf_size": 2097152, 00:20:00.978 
"enable_recv_pipe": true, 00:20:00.978 "enable_quickack": false, 00:20:00.978 "enable_placement_id": 0, 00:20:00.978 "enable_zerocopy_send_server": true, 00:20:00.978 "enable_zerocopy_send_client": false, 00:20:00.978 "zerocopy_threshold": 0, 00:20:00.978 "tls_version": 0, 00:20:00.978 "enable_ktls": false 00:20:00.978 } 00:20:00.978 } 00:20:00.978 ] 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "subsystem": "vmd", 00:20:00.978 "config": [] 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "subsystem": "accel", 00:20:00.978 "config": [ 00:20:00.978 { 00:20:00.978 "method": "accel_set_options", 00:20:00.978 "params": { 00:20:00.978 "small_cache_size": 128, 00:20:00.978 "large_cache_size": 16, 00:20:00.978 "task_count": 2048, 00:20:00.978 "sequence_count": 2048, 00:20:00.978 "buf_count": 2048 00:20:00.978 } 00:20:00.978 } 00:20:00.978 ] 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "subsystem": "bdev", 00:20:00.978 "config": [ 00:20:00.978 { 00:20:00.978 "method": "bdev_set_options", 00:20:00.978 "params": { 00:20:00.978 "bdev_io_pool_size": 65535, 00:20:00.978 "bdev_io_cache_size": 256, 00:20:00.978 "bdev_auto_examine": true, 00:20:00.978 "iobuf_small_cache_size": 128, 00:20:00.978 "iobuf_large_cache_size": 16 00:20:00.978 } 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "method": "bdev_raid_set_options", 00:20:00.978 "params": { 00:20:00.978 "process_window_size_kb": 1024 00:20:00.978 } 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "method": "bdev_iscsi_set_options", 00:20:00.978 "params": { 00:20:00.978 "timeout_sec": 30 00:20:00.978 } 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "method": "bdev_nvme_set_options", 00:20:00.978 "params": { 00:20:00.978 "action_on_timeout": "none", 00:20:00.978 "timeout_us": 0, 00:20:00.978 "timeout_admin_us": 0, 00:20:00.978 "keep_alive_timeout_ms": 10000, 00:20:00.978 "arbitration_burst": 0, 00:20:00.978 "low_priority_weight": 0, 00:20:00.978 "medium_priority_weight": 0, 00:20:00.978 "high_priority_weight": 0, 00:20:00.978 "nvme_adminq_poll_period_us": 10000, 00:20:00.978 "nvme_ioq_poll_period_us": 0, 00:20:00.978 "io_queue_requests": 512, 00:20:00.978 "delay_cmd_submit": true, 00:20:00.978 "transport_retry_count": 4, 00:20:00.978 "bdev_retry_count": 3, 00:20:00.978 "transport_ack_timeout": 0, 00:20:00.978 "ctrlr_loss_timeout_sec": 0, 00:20:00.978 "reconnect_delay_sec": 0, 00:20:00.978 "fast_io_fail_timeout_sec": 0, 00:20:00.978 "disable_auto_failback": false, 00:20:00.978 "generate_uuids": false, 00:20:00.978 "transport_tos": 0, 00:20:00.978 "nvme_error_stat": false, 00:20:00.978 "rdma_srq_size": 0, 00:20:00.978 "io_path_stat": false, 00:20:00.978 "allow_accel_sequence": false, 00:20:00.978 "rdma_max_cq_size": 0, 00:20:00.978 "rdma_cm_event_timeout_ms": 0, 00:20:00.978 "dhchap_digests": [ 00:20:00.978 "sha256", 00:20:00.978 "sha384", 00:20:00.978 "sha512" 00:20:00.978 ], 00:20:00.978 "dhchap_dhgroups": [ 00:20:00.978 "null", 00:20:00.978 "ffdhe2048", 00:20:00.978 "ffdhe3072", 00:20:00.978 "ffdhe4096", 00:20:00.978 "ffdhe6144", 00:20:00.978 "ffdhe8192" 00:20:00.978 ] 00:20:00.978 } 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "method": "bdev_nvme_attach_controller", 00:20:00.978 "params": { 00:20:00.978 "name": "nvme0", 00:20:00.978 "trtype": "TCP", 00:20:00.978 "adrfam": "IPv4", 00:20:00.978 "traddr": "10.0.0.2", 00:20:00.978 "trsvcid": "4420", 00:20:00.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.978 "prchk_reftag": false, 00:20:00.978 "prchk_guard": false, 00:20:00.978 "ctrlr_loss_timeout_sec": 0, 00:20:00.978 "reconnect_delay_sec": 0, 00:20:00.978 
"fast_io_fail_timeout_sec": 0, 00:20:00.978 "psk": "key0", 00:20:00.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.978 "hdgst": false, 00:20:00.978 "ddgst": false 00:20:00.978 } 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "method": "bdev_nvme_set_hotplug", 00:20:00.978 "params": { 00:20:00.978 "period_us": 100000, 00:20:00.978 "enable": false 00:20:00.978 } 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "method": "bdev_enable_histogram", 00:20:00.978 "params": { 00:20:00.978 "name": "nvme0n1", 00:20:00.978 "enable": true 00:20:00.978 } 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "method": "bdev_wait_for_examine" 00:20:00.978 } 00:20:00.978 ] 00:20:00.978 }, 00:20:00.978 { 00:20:00.978 "subsystem": "nbd", 00:20:00.978 "config": [] 00:20:00.978 } 00:20:00.978 ] 00:20:00.978 }' 00:20:00.978 14:47:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.978 14:47:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.979 14:47:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.238 [2024-07-25 14:47:21.298356] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:20:01.238 [2024-07-25 14:47:21.298403] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364736 ] 00:20:01.238 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.238 [2024-07-25 14:47:21.351411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.238 [2024-07-25 14:47:21.430495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.497 [2024-07-25 14:47:21.580954] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.066 14:47:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.066 14:47:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:02.066 14:47:22 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:02.066 14:47:22 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:02.066 14:47:22 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.066 14:47:22 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.325 Running I/O for 1 seconds... 
00:20:03.260 00:20:03.260 Latency(us) 00:20:03.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.260 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:03.260 Verification LBA range: start 0x0 length 0x2000 00:20:03.260 nvme0n1 : 1.10 889.95 3.48 0.00 0.00 139428.68 7265.95 162301.33 00:20:03.260 =================================================================================================================== 00:20:03.260 Total : 889.95 3.48 0.00 0.00 139428.68 7265.95 162301.33 00:20:03.260 0 00:20:03.260 14:47:23 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:03.260 14:47:23 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:03.260 14:47:23 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:03.260 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:20:03.260 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:20:03.260 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:03.260 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:03.260 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:03.260 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:03.260 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:03.260 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:03.260 nvmf_trace.0 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2364736 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2364736 ']' 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2364736 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2364736 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2364736' 00:20:03.519 killing process with pid 2364736 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2364736 00:20:03.519 Received shutdown signal, test time was about 1.000000 seconds 00:20:03.519 00:20:03.519 Latency(us) 00:20:03.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.519 =================================================================================================================== 00:20:03.519 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2364736 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
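Before tearing everything down, the cleanup path archives the shared-memory trace file the target left in /dev/shm so it can be inspected offline with spdk_trace, as the app_setup_trace notices earlier in the log suggest. A minimal sketch of that step, with $output_dir standing in for the spdk/../output directory shown above:

    # Locate the trace file(s) matching the application's shm id (id 0 here).
    shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')

    # Pack them for offline analysis before the processes and namespaces are removed.
    tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" $shm_files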
00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:03.519 14:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:03.519 rmmod nvme_tcp 00:20:03.519 rmmod nvme_fabrics 00:20:03.779 rmmod nvme_keyring 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2364490 ']' 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2364490 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2364490 ']' 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2364490 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2364490 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2364490' 00:20:03.779 killing process with pid 2364490 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2364490 00:20:03.779 14:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2364490 00:20:03.779 14:47:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:03.779 14:47:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:03.779 14:47:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:03.779 14:47:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:03.779 14:47:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:03.779 14:47:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.779 14:47:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:03.779 14:47:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.323 14:47:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:06.323 14:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.CdM1fzK8Bl /tmp/tmp.uOmXyIJmWq /tmp/tmp.XJLU5LmErb 00:20:06.323 00:20:06.323 real 1m24.645s 00:20:06.323 user 2m13.611s 00:20:06.323 sys 0m25.709s 00:20:06.323 14:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:06.323 14:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.323 ************************************ 00:20:06.323 END TEST nvmf_tls 00:20:06.323 ************************************ 00:20:06.323 14:47:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:06.323 14:47:26 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:06.323 14:47:26 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:06.323 14:47:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.323 14:47:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:06.323 ************************************ 00:20:06.323 START TEST nvmf_fips 00:20:06.323 ************************************ 00:20:06.323 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:06.323 * Looking for test storage... 00:20:06.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:06.323 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.323 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:06.323 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.323 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.323 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.323 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:06.324 
14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:06.324 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:06.325 Error setting digest 00:20:06.325 00421A48537F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:06.325 00421A48537F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:06.325 14:47:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:11.613 
14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:11.613 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:11.613 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:11.613 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:11.614 Found net devices under 0000:86:00.0: cvl_0_0 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:11.614 Found net devices under 0000:86:00.1: cvl_0_1 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:11.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:20:11.614 00:20:11.614 --- 10.0.0.2 ping statistics --- 00:20:11.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.614 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:11.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:20:11.614 00:20:11.614 --- 10.0.0.1 ping statistics --- 00:20:11.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.614 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2368526 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2368526 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2368526 ']' 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.614 14:47:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:11.614 [2024-07-25 14:47:31.736336] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:20:11.614 [2024-07-25 14:47:31.736387] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.614 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.614 [2024-07-25 14:47:31.794179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.614 [2024-07-25 14:47:31.870056] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.614 [2024-07-25 14:47:31.870091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:11.614 [2024-07-25 14:47:31.870098] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.614 [2024-07-25 14:47:31.870104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.614 [2024-07-25 14:47:31.870109] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.614 [2024-07-25 14:47:31.870127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:12.636 [2024-07-25 14:47:32.709904] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.636 [2024-07-25 14:47:32.725916] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.636 [2024-07-25 14:47:32.726090] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.636 [2024-07-25 14:47:32.754106] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:12.636 malloc0 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2368773 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2368773 /var/tmp/bdevperf.sock 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2368773 ']' 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.636 14:47:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:12.636 [2024-07-25 14:47:32.836176] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:20:12.636 [2024-07-25 14:47:32.836228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368773 ] 00:20:12.636 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.636 [2024-07-25 14:47:32.886790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.895 [2024-07-25 14:47:32.959652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.463 14:47:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.463 14:47:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:13.463 14:47:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:13.722 [2024-07-25 14:47:33.769190] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.722 [2024-07-25 14:47:33.769290] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:13.722 TLSTESTn1 00:20:13.722 14:47:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:13.722 Running I/O for 10 seconds... 
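For reference, the TLS I/O phase traced above reduces to the following three-step sequence, condensed from the commands in this log (the long Jenkins workspace prefixes are shortened to relative SPDK paths here, so adjust them for your own checkout):

  # start bdevperf in wait-for-RPC mode (-z) on a private RPC socket: 128-deep, 4K, verify workload for 10s (run in the background here)
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # attach an NVMe-oF/TCP controller over TLS, pointing --psk at the key.txt written earlier
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk ./test/nvmf/fips/key.txt

  # kick off the configured workload; the Latency table that follows is its output
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests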
00:20:25.982 00:20:25.982 Latency(us) 00:20:25.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.982 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:25.982 Verification LBA range: start 0x0 length 0x2000 00:20:25.982 TLSTESTn1 : 10.09 1188.81 4.64 0.00 0.00 107289.60 6382.64 173242.99 00:20:25.982 =================================================================================================================== 00:20:25.982 Total : 1188.81 4.64 0.00 0.00 107289.60 6382.64 173242.99 00:20:25.982 0 00:20:25.982 14:47:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:25.982 14:47:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:25.982 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:20:25.982 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:20:25.982 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:25.983 nvmf_trace.0 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2368773 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2368773 ']' 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2368773 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2368773 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2368773' 00:20:25.983 killing process with pid 2368773 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2368773 00:20:25.983 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.983 00:20:25.983 Latency(us) 00:20:25.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.983 =================================================================================================================== 00:20:25.983 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.983 [2024-07-25 14:47:44.211421] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2368773 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips 
-- nvmf/common.sh@488 -- # nvmfcleanup 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.983 rmmod nvme_tcp 00:20:25.983 rmmod nvme_fabrics 00:20:25.983 rmmod nvme_keyring 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2368526 ']' 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2368526 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2368526 ']' 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2368526 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2368526 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2368526' 00:20:25.983 killing process with pid 2368526 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2368526 00:20:25.983 [2024-07-25 14:47:44.490245] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2368526 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.983 14:47:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.552 14:47:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:26.552 14:47:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:26.552 00:20:26.552 real 0m20.550s 00:20:26.552 user 0m23.363s 00:20:26.552 sys 0m8.033s 00:20:26.552 14:47:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:26.552 14:47:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.552 ************************************ 00:20:26.552 END TEST nvmf_fips 
00:20:26.552 ************************************ 00:20:26.552 14:47:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:26.552 14:47:46 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:26.552 14:47:46 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:26.552 14:47:46 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:26.552 14:47:46 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:26.552 14:47:46 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:26.552 14:47:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.830 14:47:51 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:31.831 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:31.831 14:47:51 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:31.831 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:31.831 Found net devices under 0000:86:00.0: cvl_0_0 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:31.831 Found net devices under 0000:86:00.1: cvl_0_1 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:20:31.831 14:47:51 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:31.831 14:47:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:31.831 14:47:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
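The launch of nvmf_perf_adq just above only happens once the gating checks in nvmf.sh pass; in standalone form the logic is roughly the sketch below (variable names other than NET_TYPE, net_devs and TCP_INTERFACE_LIST are placeholders, not the script's actual identifiers):

  # run the ADQ perf suite only on physical NICs, over TCP, and only if the
  # PCI scan above actually produced at least one supported net device
  if [[ $NET_TYPE == phy && $TRANSPORT == tcp ]]; then
      TCP_INTERFACE_LIST=("${net_devs[@]}")          # cvl_0_0 and cvl_0_1 on this host
      if (( ${#TCP_INTERFACE_LIST[@]} > 0 )); then
          run_test nvmf_perf_adq ./test/nvmf/target/perf_adq.sh --transport=tcp
      fi
  fi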
00:20:31.831 14:47:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:31.831 ************************************ 00:20:31.831 START TEST nvmf_perf_adq 00:20:31.831 ************************************ 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:31.831 * Looking for test storage... 00:20:31.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:31.831 14:47:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:37.111 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:37.111 Found 0000:86:00.1 (0x8086 - 0x159b) 
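The NIC discovery being traced here walks sysfs for each matching PCI function; outside the harness the same information can be pulled by hand, roughly as follows (a sketch using standard tools rather than the test's own helpers; the PCI addresses are the ones reported in this log):

  # list Intel E810 ports by PCI vendor/device ID (0x8086:0x159b, the ID matched above)
  lspci -d 8086:159b

  # resolve the kernel net device name behind each PCI function
  ls /sys/bus/pci/devices/0000:86:00.0/net/    # -> cvl_0_0 on this host
  ls /sys/bus/pci/devices/0000:86:00.1/net/    # -> cvl_0_1 on this host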
00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.111 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:37.112 Found net devices under 0000:86:00.0: cvl_0_0 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:37.112 Found net devices under 0000:86:00.1: cvl_0_1 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:37.112 14:47:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:37.432 14:47:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:39.361 14:47:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:44.639 14:48:04 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:44.639 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:44.639 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.639 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:44.639 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:44.639 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:44.639 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.639 14:48:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:44.639 14:48:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.639 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:44.640 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:44.640 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:44.640 Found net devices under 0000:86:00.0: cvl_0_0 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:44.640 Found net devices under 0000:86:00.1: cvl_0_1 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.640 14:48:04 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:44.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:20:44.640 00:20:44.640 --- 10.0.0.2 ping statistics --- 00:20:44.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.640 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:44.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.391 ms 00:20:44.640 00:20:44.640 --- 10.0.0.1 ping statistics --- 00:20:44.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.640 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2378571 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2378571 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2378571 ']' 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.640 14:48:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.640 [2024-07-25 14:48:04.523834] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
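Everything the perf run depends on is now in place: the first E810 port (cvl_0_0) has been moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and reachability is verified in both directions before the target is started inside the namespace with --wait-for-rpc. A condensed sketch of that topology setup, using the interface names and addresses from this run:

# Sketch of the nvmf_tcp_init steps traced above (names/addresses from this run).
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

# The target port lives in its own network namespace; the initiator stays in the default one.
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open NVMe/TCP (port 4420) on the initiator-facing interface, then verify reachability.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

Running nvmf_tgt under ip netns exec keeps its 10.0.0.2:4420 listener isolated from the host stack, so traffic between spdk_nvme_perf and the target crosses the link between the two E810 ports instead of the kernel loopback.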
00:20:44.640 [2024-07-25 14:48:04.523875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.640 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.640 [2024-07-25 14:48:04.579835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.640 [2024-07-25 14:48:04.661696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.640 [2024-07-25 14:48:04.661734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.640 [2024-07-25 14:48:04.661741] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.640 [2024-07-25 14:48:04.661747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.640 [2024-07-25 14:48:04.661752] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.640 [2024-07-25 14:48:04.661786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.640 [2024-07-25 14:48:04.661883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.640 [2024-07-25 14:48:04.661958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.640 [2024-07-25 14:48:04.661960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.209 14:48:05 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:45.468 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.468 14:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:45.468 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.468 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.468 [2024-07-25 14:48:05.523912] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.468 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.469 Malloc1 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.469 [2024-07-25 14:48:05.575540] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2378703 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:45.469 14:48:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:45.469 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.379 14:48:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:47.379 14:48:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.379 14:48:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.379 14:48:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.379 14:48:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:47.379 
"tick_rate": 2300000000, 00:20:47.379 "poll_groups": [ 00:20:47.379 { 00:20:47.379 "name": "nvmf_tgt_poll_group_000", 00:20:47.379 "admin_qpairs": 1, 00:20:47.379 "io_qpairs": 1, 00:20:47.379 "current_admin_qpairs": 1, 00:20:47.379 "current_io_qpairs": 1, 00:20:47.379 "pending_bdev_io": 0, 00:20:47.379 "completed_nvme_io": 20057, 00:20:47.379 "transports": [ 00:20:47.379 { 00:20:47.379 "trtype": "TCP" 00:20:47.379 } 00:20:47.379 ] 00:20:47.379 }, 00:20:47.379 { 00:20:47.379 "name": "nvmf_tgt_poll_group_001", 00:20:47.379 "admin_qpairs": 0, 00:20:47.379 "io_qpairs": 1, 00:20:47.379 "current_admin_qpairs": 0, 00:20:47.379 "current_io_qpairs": 1, 00:20:47.379 "pending_bdev_io": 0, 00:20:47.379 "completed_nvme_io": 19455, 00:20:47.379 "transports": [ 00:20:47.379 { 00:20:47.379 "trtype": "TCP" 00:20:47.379 } 00:20:47.379 ] 00:20:47.379 }, 00:20:47.379 { 00:20:47.379 "name": "nvmf_tgt_poll_group_002", 00:20:47.379 "admin_qpairs": 0, 00:20:47.379 "io_qpairs": 1, 00:20:47.379 "current_admin_qpairs": 0, 00:20:47.379 "current_io_qpairs": 1, 00:20:47.379 "pending_bdev_io": 0, 00:20:47.379 "completed_nvme_io": 19946, 00:20:47.379 "transports": [ 00:20:47.379 { 00:20:47.379 "trtype": "TCP" 00:20:47.379 } 00:20:47.379 ] 00:20:47.379 }, 00:20:47.379 { 00:20:47.379 "name": "nvmf_tgt_poll_group_003", 00:20:47.379 "admin_qpairs": 0, 00:20:47.379 "io_qpairs": 1, 00:20:47.379 "current_admin_qpairs": 0, 00:20:47.379 "current_io_qpairs": 1, 00:20:47.379 "pending_bdev_io": 0, 00:20:47.379 "completed_nvme_io": 20036, 00:20:47.379 "transports": [ 00:20:47.379 { 00:20:47.379 "trtype": "TCP" 00:20:47.379 } 00:20:47.379 ] 00:20:47.379 } 00:20:47.379 ] 00:20:47.379 }' 00:20:47.379 14:48:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:47.379 14:48:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:47.379 14:48:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:47.379 14:48:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:47.379 14:48:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2378703 00:20:55.504 Initializing NVMe Controllers 00:20:55.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:55.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:55.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:55.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:55.504 Initialization complete. Launching workers. 
00:20:55.504 ======================================================== 00:20:55.504 Latency(us) 00:20:55.504 Device Information : IOPS MiB/s Average min max 00:20:55.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10549.79 41.21 6067.32 1635.85 12845.06 00:20:55.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10460.19 40.86 6118.71 1618.68 11291.92 00:20:55.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10630.59 41.53 6020.82 1714.79 14838.20 00:20:55.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10666.09 41.66 6018.75 1698.96 48128.70 00:20:55.504 ======================================================== 00:20:55.504 Total : 42306.67 165.26 6056.10 1618.68 48128.70 00:20:55.504 00:20:55.504 14:48:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:55.504 14:48:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:55.504 14:48:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:55.504 14:48:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:55.504 14:48:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:55.504 14:48:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:55.504 14:48:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:55.504 rmmod nvme_tcp 00:20:55.764 rmmod nvme_fabrics 00:20:55.764 rmmod nvme_keyring 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2378571 ']' 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2378571 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2378571 ']' 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2378571 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2378571 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2378571' 00:20:55.764 killing process with pid 2378571 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2378571 00:20:55.764 14:48:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2378571 00:20:56.024 14:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:56.024 14:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:56.024 14:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:56.024 14:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:56.024 14:48:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:56.024 14:48:16 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.024 14:48:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.024 14:48:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.934 14:48:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:57.934 14:48:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:57.934 14:48:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:59.315 14:48:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:01.225 14:48:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:06.506 14:48:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:06.506 14:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:06.506 14:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.506 14:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:06.506 14:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:06.506 14:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:06.506 14:48:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.506 14:48:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.506 14:48:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.506 
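The first pass is torn down above (nvme-tcp/fabrics modules removed, the target killed, the namespace addresses flushed) and the ice driver is reloaded before the ADQ-enabled pass begins. Its result had already been validated by fetching nvmf_get_stats over RPC and counting poll groups that own exactly one I/O qpair (count=4, one per core in the 0xF mask). A sketch of that kind of check, assuming scripts/rpc.py is available and the target listens on the default /var/tmp/spdk.sock:

# Sketch: count poll groups that currently own one I/O qpair, mirroring the jq
# pipeline traced earlier. The rpc.py path and socket here are illustrative assumptions.
stats=$(./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats)

count=$(echo "$stats" \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | .name' \
    | wc -l)

if [[ $count -ne 4 ]]; then
    echo "expected 4 poll groups with one I/O qpair each, got $count" >&2
    exit 1
fi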
14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.506 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:06.507 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:06.507 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:06.507 Found net devices under 0000:86:00.0: cvl_0_0 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:06.507 Found net devices under 0000:86:00.1: cvl_0_1 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.507 
14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:06.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:21:06.507 00:21:06.507 --- 10.0.0.2 ping statistics --- 00:21:06.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.507 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:06.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:21:06.507 00:21:06.507 --- 10.0.0.1 ping statistics --- 00:21:06.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.507 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:06.507 net.core.busy_poll = 1 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:06.507 net.core.busy_read = 1 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2382839 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2382839 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2382839 ']' 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:06.507 14:48:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.507 [2024-07-25 14:48:26.603265] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:21:06.508 [2024-07-25 14:48:26.603311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.508 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.508 [2024-07-25 14:48:26.662281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.508 [2024-07-25 14:48:26.734354] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.508 [2024-07-25 14:48:26.734396] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.508 [2024-07-25 14:48:26.734403] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.508 [2024-07-25 14:48:26.734408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.508 [2024-07-25 14:48:26.734413] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
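For the second pass, ADQ steering is configured on the target port before the target application comes up: hardware TC offload is enabled, busy polling is turned on, an mqprio qdisc splits the port into two traffic classes, and a flower filter pins NVMe/TCP traffic for 10.0.0.2:4420 to TC 1 in hardware, followed by the set_xps_rxqs helper script (not reproduced here). The commands traced above, gathered in one place with the values used in this run:

# Sketch of the adq_configure_driver steps traced above, run against the target namespace.
NS=cvl_0_0_ns_spdk
IF=cvl_0_0

ip netns exec "$NS" ethtool --offload "$IF" hw-tc-offload on
ip netns exec "$NS" ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off

# Busy polling lets socket reads spin briefly instead of sleeping on the wait queue.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded in channel mode.
ip netns exec "$NS" tc qdisc add dev "$IF" root mqprio num_tc 2 map 0 1 \
    queues 2@0 2@2 hw 1 mode channel
ip netns exec "$NS" tc qdisc add dev "$IF" ingress

# Steer NVMe/TCP traffic for 10.0.0.2:4420 into hardware TC 1.
ip netns exec "$NS" tc filter add dev "$IF" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The target side pairs this with sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport ... --sock-priority 1 (versus 0 in the first pass), which groups accepted connections by placement ID; the stats further below accordingly show two poll groups carrying two I/O qpairs each rather than one qpair on each of four groups.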
00:21:06.508 [2024-07-25 14:48:26.734462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.508 [2024-07-25 14:48:26.734560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.508 [2024-07-25 14:48:26.734625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.508 [2024-07-25 14:48:26.734626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.480 [2024-07-25 14:48:27.596665] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.480 Malloc1 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.480 14:48:27 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.480 [2024-07-25 14:48:27.640571] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2383038 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:07.480 14:48:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:07.480 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.392 14:48:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:09.392 14:48:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.392 14:48:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.392 14:48:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.392 14:48:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:09.392 "tick_rate": 2300000000, 00:21:09.392 "poll_groups": [ 00:21:09.392 { 00:21:09.392 "name": "nvmf_tgt_poll_group_000", 00:21:09.392 "admin_qpairs": 1, 00:21:09.392 "io_qpairs": 2, 00:21:09.392 "current_admin_qpairs": 1, 00:21:09.392 "current_io_qpairs": 2, 00:21:09.392 "pending_bdev_io": 0, 00:21:09.392 "completed_nvme_io": 26757, 00:21:09.392 "transports": [ 00:21:09.392 { 00:21:09.392 "trtype": "TCP" 00:21:09.392 } 00:21:09.392 ] 00:21:09.392 }, 00:21:09.392 { 00:21:09.392 "name": "nvmf_tgt_poll_group_001", 00:21:09.392 "admin_qpairs": 0, 00:21:09.392 "io_qpairs": 2, 00:21:09.392 "current_admin_qpairs": 0, 00:21:09.392 "current_io_qpairs": 2, 00:21:09.392 "pending_bdev_io": 0, 00:21:09.392 "completed_nvme_io": 26438, 00:21:09.392 "transports": [ 00:21:09.392 { 00:21:09.392 "trtype": "TCP" 00:21:09.392 } 00:21:09.392 ] 00:21:09.392 }, 00:21:09.392 { 00:21:09.392 "name": "nvmf_tgt_poll_group_002", 00:21:09.392 "admin_qpairs": 0, 00:21:09.392 "io_qpairs": 0, 00:21:09.392 "current_admin_qpairs": 0, 00:21:09.392 "current_io_qpairs": 0, 00:21:09.392 "pending_bdev_io": 0, 00:21:09.392 "completed_nvme_io": 0, 
00:21:09.392 "transports": [ 00:21:09.392 { 00:21:09.392 "trtype": "TCP" 00:21:09.393 } 00:21:09.393 ] 00:21:09.393 }, 00:21:09.393 { 00:21:09.393 "name": "nvmf_tgt_poll_group_003", 00:21:09.393 "admin_qpairs": 0, 00:21:09.393 "io_qpairs": 0, 00:21:09.393 "current_admin_qpairs": 0, 00:21:09.393 "current_io_qpairs": 0, 00:21:09.393 "pending_bdev_io": 0, 00:21:09.393 "completed_nvme_io": 0, 00:21:09.393 "transports": [ 00:21:09.393 { 00:21:09.393 "trtype": "TCP" 00:21:09.393 } 00:21:09.393 ] 00:21:09.393 } 00:21:09.393 ] 00:21:09.393 }' 00:21:09.393 14:48:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:09.393 14:48:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:09.654 14:48:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:09.654 14:48:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:09.654 14:48:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2383038 00:21:17.782 Initializing NVMe Controllers 00:21:17.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:17.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:17.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:17.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:17.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:17.782 Initialization complete. Launching workers. 00:21:17.782 ======================================================== 00:21:17.782 Latency(us) 00:21:17.782 Device Information : IOPS MiB/s Average min max 00:21:17.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7288.00 28.47 8813.56 1711.13 54446.09 00:21:17.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7032.50 27.47 9101.31 1618.49 54720.76 00:21:17.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7143.60 27.90 8980.08 1696.31 54425.89 00:21:17.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6819.90 26.64 9390.69 1815.01 53517.49 00:21:17.782 ======================================================== 00:21:17.782 Total : 28283.99 110.48 9066.32 1618.49 54720.76 00:21:17.782 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:17.782 rmmod nvme_tcp 00:21:17.782 rmmod nvme_fabrics 00:21:17.782 rmmod nvme_keyring 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2382839 ']' 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2382839 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2382839 ']' 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2382839 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2382839 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2382839' 00:21:17.782 killing process with pid 2382839 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2382839 00:21:17.782 14:48:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2382839 00:21:18.042 14:48:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:18.042 14:48:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:18.042 14:48:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:18.042 14:48:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:18.042 14:48:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:18.042 14:48:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.042 14:48:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.042 14:48:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.342 14:48:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.342 14:48:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:21.342 00:21:21.342 real 0m49.889s 00:21:21.342 user 2m49.304s 00:21:21.342 sys 0m9.490s 00:21:21.342 14:48:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:21.342 14:48:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:21.342 ************************************ 00:21:21.342 END TEST nvmf_perf_adq 00:21:21.342 ************************************ 00:21:21.342 14:48:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:21.342 14:48:41 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:21.342 14:48:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:21.342 14:48:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:21.342 14:48:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.342 ************************************ 00:21:21.342 START TEST nvmf_shutdown 00:21:21.342 ************************************ 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:21.342 * Looking for test storage... 
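For reference, the nvmf_perf_adq flow traced above collapses to the short sequence below. This is a minimal sketch, not the test script itself: it assumes a running nvmf_tgt that already exposes a Malloc1 bdev, SPDK's scripts/rpc.py on PATH (the trace issues the same calls through its rpc_cmd wrapper), and spdk_nvme_perf from the build tree.

#!/usr/bin/env bash
# Sketch of the ADQ perf check above; rpc.py stands in for the rpc_cmd wrapper.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Drive randread I/O from cores 4-7 (-c 0xF0), exactly as in the trace.
spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
perfpid=$!
sleep 2

# Same check as perf_adq.sh@99-101: count poll groups with no active I/O qpairs.
# The trace sees two idle groups (002 and 003) and two busy ones, so the
# "[[ count -lt 2 ]]" guard does not trip and the run proceeds to wait.
idle_groups=$(rpc.py nvmf_get_stats \
  | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
  | wc -l)
[[ $idle_groups -lt 2 ]] && echo "fewer than 2 idle poll groups" >&2

wait $perfpid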
00:21:21.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:21.342 ************************************ 00:21:21.342 START TEST nvmf_shutdown_tc1 00:21:21.342 ************************************ 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:21.342 14:48:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.342 14:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.343 14:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.343 14:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.343 14:48:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.633 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:26.634 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:26.634 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.634 14:48:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:26.634 Found net devices under 0000:86:00.0: cvl_0_0 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:26.634 Found net devices under 0000:86:00.1: cvl_0_1 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:26.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:26.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:21:26.634 00:21:26.634 --- 10.0.0.2 ping statistics --- 00:21:26.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.634 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:26.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:26.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:21:26.634 00:21:26.634 --- 10.0.0.1 ping statistics --- 00:21:26.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.634 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2388478 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2388478 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2388478 ']' 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.634 14:48:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.634 [2024-07-25 14:48:46.884962] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:21:26.635 [2024-07-25 14:48:46.885005] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.635 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.895 [2024-07-25 14:48:46.944995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:26.895 [2024-07-25 14:48:47.026160] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.895 [2024-07-25 14:48:47.026195] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.895 [2024-07-25 14:48:47.026202] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.895 [2024-07-25 14:48:47.026219] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.895 [2024-07-25 14:48:47.026225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.895 [2024-07-25 14:48:47.026322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.895 [2024-07-25 14:48:47.026409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.895 [2024-07-25 14:48:47.026509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:26.895 [2024-07-25 14:48:47.026510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.465 [2024-07-25 14:48:47.742982] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.465 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:27.724 14:48:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.724 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.724 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.724 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.724 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.724 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.724 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.724 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.724 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.724 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.724 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.724 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.725 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.725 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.725 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.725 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.725 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.725 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.725 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.725 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.725 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:27.725 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.725 14:48:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.725 Malloc1 00:21:27.725 [2024-07-25 14:48:47.838622] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.725 Malloc2 00:21:27.725 Malloc3 00:21:27.725 Malloc4 00:21:27.725 Malloc5 00:21:27.984 Malloc6 00:21:27.984 Malloc7 00:21:27.984 Malloc8 00:21:27.984 Malloc9 00:21:27.984 Malloc10 00:21:27.984 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2388757 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2388757 
/var/tmp/bdevperf.sock 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2388757 ']' 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.985 { 00:21:27.985 "params": { 00:21:27.985 "name": "Nvme$subsystem", 00:21:27.985 "trtype": "$TEST_TRANSPORT", 00:21:27.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.985 "adrfam": "ipv4", 00:21:27.985 "trsvcid": "$NVMF_PORT", 00:21:27.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.985 "hdgst": ${hdgst:-false}, 00:21:27.985 "ddgst": ${ddgst:-false} 00:21:27.985 }, 00:21:27.985 "method": "bdev_nvme_attach_controller" 00:21:27.985 } 00:21:27.985 EOF 00:21:27.985 )") 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.985 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.985 { 00:21:27.985 "params": { 00:21:27.985 "name": "Nvme$subsystem", 00:21:27.985 "trtype": "$TEST_TRANSPORT", 00:21:27.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.985 "adrfam": "ipv4", 00:21:27.985 "trsvcid": "$NVMF_PORT", 00:21:27.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.985 "hdgst": ${hdgst:-false}, 00:21:27.985 "ddgst": ${ddgst:-false} 00:21:27.985 }, 00:21:27.985 "method": "bdev_nvme_attach_controller" 00:21:27.985 } 00:21:27.985 EOF 00:21:27.985 )") 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.245 { 00:21:28.245 "params": { 00:21:28.245 
"name": "Nvme$subsystem", 00:21:28.245 "trtype": "$TEST_TRANSPORT", 00:21:28.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.245 "adrfam": "ipv4", 00:21:28.245 "trsvcid": "$NVMF_PORT", 00:21:28.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.245 "hdgst": ${hdgst:-false}, 00:21:28.245 "ddgst": ${ddgst:-false} 00:21:28.245 }, 00:21:28.245 "method": "bdev_nvme_attach_controller" 00:21:28.245 } 00:21:28.245 EOF 00:21:28.245 )") 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.245 { 00:21:28.245 "params": { 00:21:28.245 "name": "Nvme$subsystem", 00:21:28.245 "trtype": "$TEST_TRANSPORT", 00:21:28.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.245 "adrfam": "ipv4", 00:21:28.245 "trsvcid": "$NVMF_PORT", 00:21:28.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.245 "hdgst": ${hdgst:-false}, 00:21:28.245 "ddgst": ${ddgst:-false} 00:21:28.245 }, 00:21:28.245 "method": "bdev_nvme_attach_controller" 00:21:28.245 } 00:21:28.245 EOF 00:21:28.245 )") 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.245 { 00:21:28.245 "params": { 00:21:28.245 "name": "Nvme$subsystem", 00:21:28.245 "trtype": "$TEST_TRANSPORT", 00:21:28.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.245 "adrfam": "ipv4", 00:21:28.245 "trsvcid": "$NVMF_PORT", 00:21:28.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.245 "hdgst": ${hdgst:-false}, 00:21:28.245 "ddgst": ${ddgst:-false} 00:21:28.245 }, 00:21:28.245 "method": "bdev_nvme_attach_controller" 00:21:28.245 } 00:21:28.245 EOF 00:21:28.245 )") 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.245 { 00:21:28.245 "params": { 00:21:28.245 "name": "Nvme$subsystem", 00:21:28.245 "trtype": "$TEST_TRANSPORT", 00:21:28.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.245 "adrfam": "ipv4", 00:21:28.245 "trsvcid": "$NVMF_PORT", 00:21:28.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.245 "hdgst": ${hdgst:-false}, 00:21:28.245 "ddgst": ${ddgst:-false} 00:21:28.245 }, 00:21:28.245 "method": "bdev_nvme_attach_controller" 00:21:28.245 } 00:21:28.245 EOF 00:21:28.245 )") 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.245 { 00:21:28.245 "params": { 00:21:28.245 "name": "Nvme$subsystem", 
00:21:28.245 "trtype": "$TEST_TRANSPORT", 00:21:28.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.245 "adrfam": "ipv4", 00:21:28.245 "trsvcid": "$NVMF_PORT", 00:21:28.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.245 "hdgst": ${hdgst:-false}, 00:21:28.245 "ddgst": ${ddgst:-false} 00:21:28.245 }, 00:21:28.245 "method": "bdev_nvme_attach_controller" 00:21:28.245 } 00:21:28.245 EOF 00:21:28.245 )") 00:21:28.245 [2024-07-25 14:48:48.310128] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:21:28.245 [2024-07-25 14:48:48.310179] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.245 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.245 { 00:21:28.245 "params": { 00:21:28.245 "name": "Nvme$subsystem", 00:21:28.245 "trtype": "$TEST_TRANSPORT", 00:21:28.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.245 "adrfam": "ipv4", 00:21:28.245 "trsvcid": "$NVMF_PORT", 00:21:28.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.245 "hdgst": ${hdgst:-false}, 00:21:28.245 "ddgst": ${ddgst:-false} 00:21:28.245 }, 00:21:28.245 "method": "bdev_nvme_attach_controller" 00:21:28.245 } 00:21:28.245 EOF 00:21:28.245 )") 00:21:28.246 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.246 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.246 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.246 { 00:21:28.246 "params": { 00:21:28.246 "name": "Nvme$subsystem", 00:21:28.246 "trtype": "$TEST_TRANSPORT", 00:21:28.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.246 "adrfam": "ipv4", 00:21:28.246 "trsvcid": "$NVMF_PORT", 00:21:28.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.246 "hdgst": ${hdgst:-false}, 00:21:28.246 "ddgst": ${ddgst:-false} 00:21:28.246 }, 00:21:28.246 "method": "bdev_nvme_attach_controller" 00:21:28.246 } 00:21:28.246 EOF 00:21:28.246 )") 00:21:28.246 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.246 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.246 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.246 { 00:21:28.246 "params": { 00:21:28.246 "name": "Nvme$subsystem", 00:21:28.246 "trtype": "$TEST_TRANSPORT", 00:21:28.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.246 "adrfam": "ipv4", 00:21:28.246 "trsvcid": "$NVMF_PORT", 00:21:28.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.246 "hdgst": ${hdgst:-false}, 00:21:28.246 "ddgst": ${ddgst:-false} 00:21:28.246 }, 00:21:28.246 "method": "bdev_nvme_attach_controller" 00:21:28.246 } 00:21:28.246 EOF 00:21:28.246 )") 00:21:28.246 14:48:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.246 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:28.246 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.246 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:28.246 14:48:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:28.246 "params": { 00:21:28.246 "name": "Nvme1", 00:21:28.246 "trtype": "tcp", 00:21:28.246 "traddr": "10.0.0.2", 00:21:28.246 "adrfam": "ipv4", 00:21:28.246 "trsvcid": "4420", 00:21:28.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:28.246 "hdgst": false, 00:21:28.246 "ddgst": false 00:21:28.246 }, 00:21:28.246 "method": "bdev_nvme_attach_controller" 00:21:28.246 },{ 00:21:28.246 "params": { 00:21:28.246 "name": "Nvme2", 00:21:28.246 "trtype": "tcp", 00:21:28.246 "traddr": "10.0.0.2", 00:21:28.246 "adrfam": "ipv4", 00:21:28.246 "trsvcid": "4420", 00:21:28.246 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:28.246 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:28.246 "hdgst": false, 00:21:28.246 "ddgst": false 00:21:28.246 }, 00:21:28.246 "method": "bdev_nvme_attach_controller" 00:21:28.246 },{ 00:21:28.246 "params": { 00:21:28.246 "name": "Nvme3", 00:21:28.246 "trtype": "tcp", 00:21:28.246 "traddr": "10.0.0.2", 00:21:28.246 "adrfam": "ipv4", 00:21:28.246 "trsvcid": "4420", 00:21:28.246 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:28.246 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:28.246 "hdgst": false, 00:21:28.246 "ddgst": false 00:21:28.246 }, 00:21:28.246 "method": "bdev_nvme_attach_controller" 00:21:28.246 },{ 00:21:28.246 "params": { 00:21:28.246 "name": "Nvme4", 00:21:28.246 "trtype": "tcp", 00:21:28.246 "traddr": "10.0.0.2", 00:21:28.246 "adrfam": "ipv4", 00:21:28.246 "trsvcid": "4420", 00:21:28.246 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:28.246 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:28.246 "hdgst": false, 00:21:28.246 "ddgst": false 00:21:28.246 }, 00:21:28.246 "method": "bdev_nvme_attach_controller" 00:21:28.246 },{ 00:21:28.246 "params": { 00:21:28.246 "name": "Nvme5", 00:21:28.246 "trtype": "tcp", 00:21:28.246 "traddr": "10.0.0.2", 00:21:28.246 "adrfam": "ipv4", 00:21:28.246 "trsvcid": "4420", 00:21:28.246 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:28.246 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:28.246 "hdgst": false, 00:21:28.246 "ddgst": false 00:21:28.246 }, 00:21:28.246 "method": "bdev_nvme_attach_controller" 00:21:28.246 },{ 00:21:28.246 "params": { 00:21:28.246 "name": "Nvme6", 00:21:28.246 "trtype": "tcp", 00:21:28.246 "traddr": "10.0.0.2", 00:21:28.246 "adrfam": "ipv4", 00:21:28.246 "trsvcid": "4420", 00:21:28.246 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:28.246 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:28.246 "hdgst": false, 00:21:28.246 "ddgst": false 00:21:28.246 }, 00:21:28.246 "method": "bdev_nvme_attach_controller" 00:21:28.246 },{ 00:21:28.246 "params": { 00:21:28.246 "name": "Nvme7", 00:21:28.246 "trtype": "tcp", 00:21:28.246 "traddr": "10.0.0.2", 00:21:28.246 "adrfam": "ipv4", 00:21:28.246 "trsvcid": "4420", 00:21:28.246 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:28.246 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:28.246 "hdgst": false, 00:21:28.246 "ddgst": false 00:21:28.246 }, 00:21:28.246 "method": "bdev_nvme_attach_controller" 00:21:28.246 },{ 00:21:28.246 "params": { 00:21:28.246 "name": "Nvme8", 00:21:28.246 "trtype": "tcp", 00:21:28.246 
"traddr": "10.0.0.2", 00:21:28.246 "adrfam": "ipv4", 00:21:28.246 "trsvcid": "4420", 00:21:28.246 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:28.246 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:28.246 "hdgst": false, 00:21:28.246 "ddgst": false 00:21:28.246 }, 00:21:28.246 "method": "bdev_nvme_attach_controller" 00:21:28.246 },{ 00:21:28.246 "params": { 00:21:28.246 "name": "Nvme9", 00:21:28.246 "trtype": "tcp", 00:21:28.246 "traddr": "10.0.0.2", 00:21:28.246 "adrfam": "ipv4", 00:21:28.246 "trsvcid": "4420", 00:21:28.246 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:28.246 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:28.246 "hdgst": false, 00:21:28.246 "ddgst": false 00:21:28.246 }, 00:21:28.246 "method": "bdev_nvme_attach_controller" 00:21:28.246 },{ 00:21:28.246 "params": { 00:21:28.246 "name": "Nvme10", 00:21:28.246 "trtype": "tcp", 00:21:28.246 "traddr": "10.0.0.2", 00:21:28.246 "adrfam": "ipv4", 00:21:28.246 "trsvcid": "4420", 00:21:28.246 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:28.246 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:28.246 "hdgst": false, 00:21:28.246 "ddgst": false 00:21:28.246 }, 00:21:28.246 "method": "bdev_nvme_attach_controller" 00:21:28.246 }' 00:21:28.246 [2024-07-25 14:48:48.367227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.246 [2024-07-25 14:48:48.442469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.627 14:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.627 14:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:29.627 14:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:29.627 14:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.627 14:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:29.627 14:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.627 14:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2388757 00:21:29.627 14:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:29.627 14:48:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:30.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2388757 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:30.567 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2388478 00:21:30.567 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:30.567 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:30.567 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:30.567 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:30.567 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.567 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.567 { 00:21:30.567 "params": { 00:21:30.567 "name": "Nvme$subsystem", 00:21:30.567 "trtype": "$TEST_TRANSPORT", 00:21:30.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.567 "adrfam": "ipv4", 00:21:30.567 "trsvcid": "$NVMF_PORT", 00:21:30.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.567 "hdgst": ${hdgst:-false}, 00:21:30.567 "ddgst": ${ddgst:-false} 00:21:30.567 }, 00:21:30.567 "method": "bdev_nvme_attach_controller" 00:21:30.567 } 00:21:30.567 EOF 00:21:30.567 )") 00:21:30.567 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:30.567 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.567 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.567 { 00:21:30.568 "params": { 00:21:30.568 "name": "Nvme$subsystem", 00:21:30.568 "trtype": "$TEST_TRANSPORT", 00:21:30.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.568 "adrfam": "ipv4", 00:21:30.568 "trsvcid": "$NVMF_PORT", 00:21:30.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.568 "hdgst": ${hdgst:-false}, 00:21:30.568 "ddgst": ${ddgst:-false} 00:21:30.568 }, 00:21:30.568 "method": "bdev_nvme_attach_controller" 00:21:30.568 } 00:21:30.568 EOF 00:21:30.568 )") 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.568 { 00:21:30.568 "params": { 00:21:30.568 "name": "Nvme$subsystem", 00:21:30.568 "trtype": "$TEST_TRANSPORT", 00:21:30.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.568 "adrfam": "ipv4", 00:21:30.568 "trsvcid": "$NVMF_PORT", 00:21:30.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.568 "hdgst": ${hdgst:-false}, 00:21:30.568 "ddgst": ${ddgst:-false} 00:21:30.568 }, 00:21:30.568 "method": "bdev_nvme_attach_controller" 00:21:30.568 } 00:21:30.568 EOF 00:21:30.568 )") 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.568 { 00:21:30.568 "params": { 00:21:30.568 "name": "Nvme$subsystem", 00:21:30.568 "trtype": "$TEST_TRANSPORT", 00:21:30.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.568 "adrfam": "ipv4", 00:21:30.568 "trsvcid": "$NVMF_PORT", 00:21:30.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.568 "hdgst": ${hdgst:-false}, 00:21:30.568 "ddgst": ${ddgst:-false} 00:21:30.568 }, 00:21:30.568 "method": "bdev_nvme_attach_controller" 00:21:30.568 } 00:21:30.568 EOF 00:21:30.568 )") 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:21:30.568 { 00:21:30.568 "params": { 00:21:30.568 "name": "Nvme$subsystem", 00:21:30.568 "trtype": "$TEST_TRANSPORT", 00:21:30.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.568 "adrfam": "ipv4", 00:21:30.568 "trsvcid": "$NVMF_PORT", 00:21:30.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.568 "hdgst": ${hdgst:-false}, 00:21:30.568 "ddgst": ${ddgst:-false} 00:21:30.568 }, 00:21:30.568 "method": "bdev_nvme_attach_controller" 00:21:30.568 } 00:21:30.568 EOF 00:21:30.568 )") 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.568 { 00:21:30.568 "params": { 00:21:30.568 "name": "Nvme$subsystem", 00:21:30.568 "trtype": "$TEST_TRANSPORT", 00:21:30.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.568 "adrfam": "ipv4", 00:21:30.568 "trsvcid": "$NVMF_PORT", 00:21:30.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.568 "hdgst": ${hdgst:-false}, 00:21:30.568 "ddgst": ${ddgst:-false} 00:21:30.568 }, 00:21:30.568 "method": "bdev_nvme_attach_controller" 00:21:30.568 } 00:21:30.568 EOF 00:21:30.568 )") 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.568 { 00:21:30.568 "params": { 00:21:30.568 "name": "Nvme$subsystem", 00:21:30.568 "trtype": "$TEST_TRANSPORT", 00:21:30.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.568 "adrfam": "ipv4", 00:21:30.568 "trsvcid": "$NVMF_PORT", 00:21:30.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.568 "hdgst": ${hdgst:-false}, 00:21:30.568 "ddgst": ${ddgst:-false} 00:21:30.568 }, 00:21:30.568 "method": "bdev_nvme_attach_controller" 00:21:30.568 } 00:21:30.568 EOF 00:21:30.568 )") 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:30.568 [2024-07-25 14:48:50.843677] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:21:30.568 [2024-07-25 14:48:50.843726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2389107 ] 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.568 { 00:21:30.568 "params": { 00:21:30.568 "name": "Nvme$subsystem", 00:21:30.568 "trtype": "$TEST_TRANSPORT", 00:21:30.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.568 "adrfam": "ipv4", 00:21:30.568 "trsvcid": "$NVMF_PORT", 00:21:30.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.568 "hdgst": ${hdgst:-false}, 00:21:30.568 "ddgst": ${ddgst:-false} 00:21:30.568 }, 00:21:30.568 "method": "bdev_nvme_attach_controller" 00:21:30.568 } 00:21:30.568 EOF 00:21:30.568 )") 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.568 { 00:21:30.568 "params": { 00:21:30.568 "name": "Nvme$subsystem", 00:21:30.568 "trtype": "$TEST_TRANSPORT", 00:21:30.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.568 "adrfam": "ipv4", 00:21:30.568 "trsvcid": "$NVMF_PORT", 00:21:30.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.568 "hdgst": ${hdgst:-false}, 00:21:30.568 "ddgst": ${ddgst:-false} 00:21:30.568 }, 00:21:30.568 "method": "bdev_nvme_attach_controller" 00:21:30.568 } 00:21:30.568 EOF 00:21:30.568 )") 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:30.568 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:30.568 { 00:21:30.568 "params": { 00:21:30.568 "name": "Nvme$subsystem", 00:21:30.568 "trtype": "$TEST_TRANSPORT", 00:21:30.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.568 "adrfam": "ipv4", 00:21:30.568 "trsvcid": "$NVMF_PORT", 00:21:30.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.568 "hdgst": ${hdgst:-false}, 00:21:30.568 "ddgst": ${ddgst:-false} 00:21:30.568 }, 00:21:30.568 "method": "bdev_nvme_attach_controller" 00:21:30.568 } 00:21:30.568 EOF 00:21:30.568 )") 00:21:30.828 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:30.828 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
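The nvmf/common.sh@534/@554 lines above are gen_nvmf_target_json iterating once per requested subsystem and appending one bdev_nvme_attach_controller fragment to a config array; the placeholders are expanded as each fragment is captured. A minimal bash sketch of that pattern, reusing the variable names that appear in the trace:

config=()
for subsystem in "${@:-1}"; do
  # one fragment per subsystem; the transport, target address and port
  # variables expand here, and hdgst/ddgst default to false
  config+=("$(cat <<-EOF
  {
    "params": {
      "name": "Nvme$subsystem",
      "trtype": "$TEST_TRANSPORT",
      "traddr": "$NVMF_FIRST_TARGET_IP",
      "adrfam": "ipv4",
      "trsvcid": "$NVMF_PORT",
      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
      "hdgst": ${hdgst:-false},
      "ddgst": ${ddgst:-false}
    },
    "method": "bdev_nvme_attach_controller"
  }
EOF
  )")
done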
00:21:30.828 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:30.828 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.828 14:48:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:30.828 "params": { 00:21:30.828 "name": "Nvme1", 00:21:30.828 "trtype": "tcp", 00:21:30.828 "traddr": "10.0.0.2", 00:21:30.828 "adrfam": "ipv4", 00:21:30.828 "trsvcid": "4420", 00:21:30.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:30.828 "hdgst": false, 00:21:30.828 "ddgst": false 00:21:30.828 }, 00:21:30.828 "method": "bdev_nvme_attach_controller" 00:21:30.828 },{ 00:21:30.828 "params": { 00:21:30.828 "name": "Nvme2", 00:21:30.828 "trtype": "tcp", 00:21:30.828 "traddr": "10.0.0.2", 00:21:30.828 "adrfam": "ipv4", 00:21:30.828 "trsvcid": "4420", 00:21:30.828 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:30.828 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:30.828 "hdgst": false, 00:21:30.828 "ddgst": false 00:21:30.828 }, 00:21:30.828 "method": "bdev_nvme_attach_controller" 00:21:30.828 },{ 00:21:30.828 "params": { 00:21:30.828 "name": "Nvme3", 00:21:30.828 "trtype": "tcp", 00:21:30.828 "traddr": "10.0.0.2", 00:21:30.828 "adrfam": "ipv4", 00:21:30.828 "trsvcid": "4420", 00:21:30.828 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:30.828 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:30.828 "hdgst": false, 00:21:30.828 "ddgst": false 00:21:30.828 }, 00:21:30.828 "method": "bdev_nvme_attach_controller" 00:21:30.828 },{ 00:21:30.828 "params": { 00:21:30.828 "name": "Nvme4", 00:21:30.828 "trtype": "tcp", 00:21:30.828 "traddr": "10.0.0.2", 00:21:30.828 "adrfam": "ipv4", 00:21:30.828 "trsvcid": "4420", 00:21:30.828 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:30.828 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:30.828 "hdgst": false, 00:21:30.828 "ddgst": false 00:21:30.828 }, 00:21:30.828 "method": "bdev_nvme_attach_controller" 00:21:30.828 },{ 00:21:30.828 "params": { 00:21:30.828 "name": "Nvme5", 00:21:30.828 "trtype": "tcp", 00:21:30.829 "traddr": "10.0.0.2", 00:21:30.829 "adrfam": "ipv4", 00:21:30.829 "trsvcid": "4420", 00:21:30.829 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:30.829 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:30.829 "hdgst": false, 00:21:30.829 "ddgst": false 00:21:30.829 }, 00:21:30.829 "method": "bdev_nvme_attach_controller" 00:21:30.829 },{ 00:21:30.829 "params": { 00:21:30.829 "name": "Nvme6", 00:21:30.829 "trtype": "tcp", 00:21:30.829 "traddr": "10.0.0.2", 00:21:30.829 "adrfam": "ipv4", 00:21:30.829 "trsvcid": "4420", 00:21:30.829 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:30.829 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:30.829 "hdgst": false, 00:21:30.829 "ddgst": false 00:21:30.829 }, 00:21:30.829 "method": "bdev_nvme_attach_controller" 00:21:30.829 },{ 00:21:30.829 "params": { 00:21:30.829 "name": "Nvme7", 00:21:30.829 "trtype": "tcp", 00:21:30.829 "traddr": "10.0.0.2", 00:21:30.829 "adrfam": "ipv4", 00:21:30.829 "trsvcid": "4420", 00:21:30.829 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:30.829 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:30.829 "hdgst": false, 00:21:30.829 "ddgst": false 00:21:30.829 }, 00:21:30.829 "method": "bdev_nvme_attach_controller" 00:21:30.829 },{ 00:21:30.829 "params": { 00:21:30.829 "name": "Nvme8", 00:21:30.829 "trtype": "tcp", 00:21:30.829 "traddr": "10.0.0.2", 00:21:30.829 "adrfam": "ipv4", 00:21:30.829 "trsvcid": "4420", 00:21:30.829 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:30.829 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:21:30.829 "hdgst": false, 00:21:30.829 "ddgst": false 00:21:30.829 }, 00:21:30.829 "method": "bdev_nvme_attach_controller" 00:21:30.829 },{ 00:21:30.829 "params": { 00:21:30.829 "name": "Nvme9", 00:21:30.829 "trtype": "tcp", 00:21:30.829 "traddr": "10.0.0.2", 00:21:30.829 "adrfam": "ipv4", 00:21:30.829 "trsvcid": "4420", 00:21:30.829 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:30.829 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:30.829 "hdgst": false, 00:21:30.829 "ddgst": false 00:21:30.829 }, 00:21:30.829 "method": "bdev_nvme_attach_controller" 00:21:30.829 },{ 00:21:30.829 "params": { 00:21:30.829 "name": "Nvme10", 00:21:30.829 "trtype": "tcp", 00:21:30.829 "traddr": "10.0.0.2", 00:21:30.829 "adrfam": "ipv4", 00:21:30.829 "trsvcid": "4420", 00:21:30.829 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:30.829 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:30.829 "hdgst": false, 00:21:30.829 "ddgst": false 00:21:30.829 }, 00:21:30.829 "method": "bdev_nvme_attach_controller" 00:21:30.829 }' 00:21:30.829 [2024-07-25 14:48:50.900178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.829 [2024-07-25 14:48:50.974883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.211 Running I/O for 1 seconds... 00:21:33.603 00:21:33.603 Latency(us) 00:21:33.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.603 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:33.603 Verification LBA range: start 0x0 length 0x400 00:21:33.603 Nvme1n1 : 1.03 247.56 15.47 0.00 0.00 255771.60 21427.42 222480.47 00:21:33.603 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:33.603 Verification LBA range: start 0x0 length 0x400 00:21:33.603 Nvme2n1 : 1.16 276.79 17.30 0.00 0.00 225990.21 33964.74 220656.86 00:21:33.603 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:33.603 Verification LBA range: start 0x0 length 0x400 00:21:33.603 Nvme3n1 : 1.14 281.00 17.56 0.00 0.00 219096.29 22567.18 203332.56 00:21:33.603 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:33.603 Verification LBA range: start 0x0 length 0x400 00:21:33.603 Nvme4n1 : 1.16 275.92 17.24 0.00 0.00 220282.88 30317.52 207891.59 00:21:33.603 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:33.603 Verification LBA range: start 0x0 length 0x400 00:21:33.603 Nvme5n1 : 1.12 228.10 14.26 0.00 0.00 262267.10 20971.52 237069.36 00:21:33.603 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:33.603 Verification LBA range: start 0x0 length 0x400 00:21:33.603 Nvme6n1 : 1.14 279.80 17.49 0.00 0.00 210767.52 19375.86 225215.89 00:21:33.603 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:33.603 Verification LBA range: start 0x0 length 0x400 00:21:33.603 Nvme7n1 : 1.22 262.73 16.42 0.00 0.00 214628.80 18122.13 220656.86 00:21:33.603 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:33.603 Verification LBA range: start 0x0 length 0x400 00:21:33.603 Nvme8n1 : 1.17 274.47 17.15 0.00 0.00 208930.01 18805.98 235245.75 00:21:33.603 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:33.603 Verification LBA range: start 0x0 length 0x400 00:21:33.603 Nvme9n1 : 1.14 224.36 14.02 0.00 0.00 250732.19 22339.23 238892.97 00:21:33.603 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:33.603 Verification LBA range: start 0x0 length 0x400 00:21:33.603 Nvme10n1 : 1.23 207.76 12.99 0.00 0.00 260097.56 17438.27 302719.33 00:21:33.603 =================================================================================================================== 00:21:33.603 Total : 2558.50 159.91 0.00 0.00 230738.09 17438.27 302719.33 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:33.603 rmmod nvme_tcp 00:21:33.603 rmmod nvme_fabrics 00:21:33.603 rmmod nvme_keyring 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2388478 ']' 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2388478 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2388478 ']' 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2388478 00:21:33.603 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:33.863 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:33.863 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2388478 00:21:33.863 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:33.863 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:33.863 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2388478' 00:21:33.863 killing process with pid 2388478 00:21:33.863 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2388478 00:21:33.863 14:48:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2388478 
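The common/autotest_common.sh@948-@972 lines above are the killprocess helper stopping the nvmf target by PID. A hedged reconstruction of the checks visible in the trace (the real helper in test/common/autotest_common.sh handles a few extra cases):

killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1            # @948: refuse an empty PID
  kill -0 "$pid" || return 0           # @952: nothing to do if it already exited
  if [ "$(uname)" = Linux ]; then
    # @953/@954: resolve the command name so a bare sudo wrapper is never the target
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = sudo ]; then
      # the real helper special-cases a sudo-wrapped process; omitted in this sketch
      return 1
    fi
  fi
  echo "killing process with pid $pid"  # @966
  kill "$pid"                           # @967
  wait "$pid"                           # @972
}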
00:21:34.121 14:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:34.121 14:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:34.121 14:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:34.121 14:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:34.121 14:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:34.121 14:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.121 14:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.121 14:48:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:36.746 00:21:36.746 real 0m14.920s 00:21:36.746 user 0m34.420s 00:21:36.746 sys 0m5.468s 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:36.746 ************************************ 00:21:36.746 END TEST nvmf_shutdown_tc1 00:21:36.746 ************************************ 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:36.746 ************************************ 00:21:36.746 START TEST nvmf_shutdown_tc2 00:21:36.746 ************************************ 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:36.746 
14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:36.746 14:48:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:36.746 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:36.746 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:36.746 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:21:36.747 Found net devices under 0000:86:00.0: cvl_0_0 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:36.747 Found net devices under 0000:86:00.1: cvl_0_1 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:36.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:21:36.747 00:21:36.747 --- 10.0.0.2 ping statistics --- 00:21:36.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.747 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:36.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:21:36.747 00:21:36.747 --- 10.0.0.1 ping statistics --- 00:21:36.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.747 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2390267 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 
2390267 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2390267 ']' 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.747 14:48:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:36.747 [2024-07-25 14:48:56.826380] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:21:36.747 [2024-07-25 14:48:56.826424] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.747 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.747 [2024-07-25 14:48:56.884106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:36.747 [2024-07-25 14:48:56.955672] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.747 [2024-07-25 14:48:56.955713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.747 [2024-07-25 14:48:56.955720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.747 [2024-07-25 14:48:56.955726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.747 [2024-07-25 14:48:56.955731] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
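Between @229 and @268 above, nvmf_tcp_init splits the two e810 ports across a network namespace so that the target (10.0.0.2 on cvl_0_0, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1, in the root namespace) reach each other over real hardware; the two pings confirm the path before nvmf_tgt is started inside the namespace. The plumbing, collected from the trace:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator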
00:21:36.747 [2024-07-25 14:48:56.955840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.747 [2024-07-25 14:48:56.955948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:36.747 [2024-07-25 14:48:56.956039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.747 [2024-07-25 14:48:56.956040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.695 [2024-07-25 14:48:57.673150] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:37.695 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:37.696 14:48:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.696 14:48:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.696 Malloc1 00:21:37.696 [2024-07-25 14:48:57.768868] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.696 Malloc2 00:21:37.696 Malloc3 00:21:37.696 Malloc4 00:21:37.696 Malloc5 00:21:37.696 Malloc6 00:21:37.955 Malloc7 00:21:37.955 Malloc8 00:21:37.955 Malloc9 00:21:37.955 Malloc10 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2390549 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2390549 /var/tmp/bdevperf.sock 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2390549 ']' 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:37.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
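Each @28 cat above appends one subsystem's worth of RPCs to rpcs.txt, and the @35 rpc_cmd replays the whole file against the freshly started target; the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener in the output are the result. A hedged sketch of what one such block typically holds for subsystem i (the RPC names are standard SPDK; the malloc size and serial number are illustrative, not taken from this log):

# one block per subsystem i, appended to rpcs.txt and executed in a single rpc_cmd call
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420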
00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:37.955 { 00:21:37.955 "params": { 00:21:37.955 "name": "Nvme$subsystem", 00:21:37.955 "trtype": "$TEST_TRANSPORT", 00:21:37.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.955 "adrfam": "ipv4", 00:21:37.955 "trsvcid": "$NVMF_PORT", 00:21:37.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.955 "hdgst": ${hdgst:-false}, 00:21:37.955 "ddgst": ${ddgst:-false} 00:21:37.955 }, 00:21:37.955 "method": "bdev_nvme_attach_controller" 00:21:37.955 } 00:21:37.955 EOF 00:21:37.955 )") 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:37.955 { 00:21:37.955 "params": { 00:21:37.955 "name": "Nvme$subsystem", 00:21:37.955 "trtype": "$TEST_TRANSPORT", 00:21:37.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.955 "adrfam": "ipv4", 00:21:37.955 "trsvcid": "$NVMF_PORT", 00:21:37.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.955 "hdgst": ${hdgst:-false}, 00:21:37.955 "ddgst": ${ddgst:-false} 00:21:37.955 }, 00:21:37.955 "method": "bdev_nvme_attach_controller" 00:21:37.955 } 00:21:37.955 EOF 00:21:37.955 )") 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:37.955 { 00:21:37.955 "params": { 00:21:37.955 "name": "Nvme$subsystem", 00:21:37.955 "trtype": "$TEST_TRANSPORT", 00:21:37.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.955 "adrfam": "ipv4", 00:21:37.955 "trsvcid": "$NVMF_PORT", 00:21:37.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.955 "hdgst": ${hdgst:-false}, 00:21:37.955 "ddgst": ${ddgst:-false} 00:21:37.955 }, 00:21:37.955 "method": "bdev_nvme_attach_controller" 00:21:37.955 } 00:21:37.955 EOF 00:21:37.955 )") 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:37.955 { 00:21:37.955 "params": { 00:21:37.955 "name": "Nvme$subsystem", 00:21:37.955 "trtype": "$TEST_TRANSPORT", 00:21:37.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.955 "adrfam": "ipv4", 00:21:37.955 "trsvcid": "$NVMF_PORT", 
00:21:37.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.955 "hdgst": ${hdgst:-false}, 00:21:37.955 "ddgst": ${ddgst:-false} 00:21:37.955 }, 00:21:37.955 "method": "bdev_nvme_attach_controller" 00:21:37.955 } 00:21:37.955 EOF 00:21:37.955 )") 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:37.955 { 00:21:37.955 "params": { 00:21:37.955 "name": "Nvme$subsystem", 00:21:37.955 "trtype": "$TEST_TRANSPORT", 00:21:37.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.955 "adrfam": "ipv4", 00:21:37.955 "trsvcid": "$NVMF_PORT", 00:21:37.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.955 "hdgst": ${hdgst:-false}, 00:21:37.955 "ddgst": ${ddgst:-false} 00:21:37.955 }, 00:21:37.955 "method": "bdev_nvme_attach_controller" 00:21:37.955 } 00:21:37.955 EOF 00:21:37.955 )") 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:37.955 { 00:21:37.955 "params": { 00:21:37.955 "name": "Nvme$subsystem", 00:21:37.955 "trtype": "$TEST_TRANSPORT", 00:21:37.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.955 "adrfam": "ipv4", 00:21:37.955 "trsvcid": "$NVMF_PORT", 00:21:37.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.955 "hdgst": ${hdgst:-false}, 00:21:37.955 "ddgst": ${ddgst:-false} 00:21:37.955 }, 00:21:37.955 "method": "bdev_nvme_attach_controller" 00:21:37.955 } 00:21:37.955 EOF 00:21:37.955 )") 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:37.955 { 00:21:37.955 "params": { 00:21:37.955 "name": "Nvme$subsystem", 00:21:37.955 "trtype": "$TEST_TRANSPORT", 00:21:37.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.955 "adrfam": "ipv4", 00:21:37.955 "trsvcid": "$NVMF_PORT", 00:21:37.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.955 "hdgst": ${hdgst:-false}, 00:21:37.955 "ddgst": ${ddgst:-false} 00:21:37.955 }, 00:21:37.955 "method": "bdev_nvme_attach_controller" 00:21:37.955 } 00:21:37.955 EOF 00:21:37.955 )") 00:21:37.955 [2024-07-25 14:48:58.240778] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:21:37.955 [2024-07-25 14:48:58.240827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390549 ] 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:37.955 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:37.955 { 00:21:37.955 "params": { 00:21:37.955 "name": "Nvme$subsystem", 00:21:37.955 "trtype": "$TEST_TRANSPORT", 00:21:37.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.955 "adrfam": "ipv4", 00:21:37.955 "trsvcid": "$NVMF_PORT", 00:21:37.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.955 "hdgst": ${hdgst:-false}, 00:21:37.955 "ddgst": ${ddgst:-false} 00:21:37.955 }, 00:21:37.955 "method": "bdev_nvme_attach_controller" 00:21:37.955 } 00:21:37.955 EOF 00:21:37.955 )") 00:21:38.214 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:38.214 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:38.214 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:38.214 { 00:21:38.214 "params": { 00:21:38.214 "name": "Nvme$subsystem", 00:21:38.214 "trtype": "$TEST_TRANSPORT", 00:21:38.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.214 "adrfam": "ipv4", 00:21:38.214 "trsvcid": "$NVMF_PORT", 00:21:38.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.214 "hdgst": ${hdgst:-false}, 00:21:38.214 "ddgst": ${ddgst:-false} 00:21:38.214 }, 00:21:38.214 "method": "bdev_nvme_attach_controller" 00:21:38.214 } 00:21:38.214 EOF 00:21:38.214 )") 00:21:38.214 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:38.214 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:38.214 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:38.214 { 00:21:38.214 "params": { 00:21:38.214 "name": "Nvme$subsystem", 00:21:38.214 "trtype": "$TEST_TRANSPORT", 00:21:38.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.215 "adrfam": "ipv4", 00:21:38.215 "trsvcid": "$NVMF_PORT", 00:21:38.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.215 "hdgst": ${hdgst:-false}, 00:21:38.215 "ddgst": ${ddgst:-false} 00:21:38.215 }, 00:21:38.215 "method": "bdev_nvme_attach_controller" 00:21:38.215 } 00:21:38.215 EOF 00:21:38.215 )") 00:21:38.215 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:38.215 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.215 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
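The @556 jq step here and the @557/@558 IFS=, and printf lines that follow turn the config array into the single JSON document bdevperf reads from /dev/fd/63: with IFS set to a comma, "${config[*]}" concatenates the ten fragments into one comma-separated list (the resolved Nvme1..Nvme10 objects printed just below), and jq normalizes the assembled document. A compact illustration of that join with a trimmed two-element array:

config=('{ "params": { "name": "Nvme1" } }' '{ "params": { "name": "Nvme2" } }')
IFS=,
printf '%s\n' "${config[*]}"
# -> { "params": { "name": "Nvme1" } },{ "params": { "name": "Nvme2" } }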
00:21:38.215 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:38.215 14:48:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:38.215 "params": { 00:21:38.215 "name": "Nvme1", 00:21:38.215 "trtype": "tcp", 00:21:38.215 "traddr": "10.0.0.2", 00:21:38.215 "adrfam": "ipv4", 00:21:38.215 "trsvcid": "4420", 00:21:38.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.215 "hdgst": false, 00:21:38.215 "ddgst": false 00:21:38.215 }, 00:21:38.215 "method": "bdev_nvme_attach_controller" 00:21:38.215 },{ 00:21:38.215 "params": { 00:21:38.215 "name": "Nvme2", 00:21:38.215 "trtype": "tcp", 00:21:38.215 "traddr": "10.0.0.2", 00:21:38.215 "adrfam": "ipv4", 00:21:38.215 "trsvcid": "4420", 00:21:38.215 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:38.215 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:38.215 "hdgst": false, 00:21:38.215 "ddgst": false 00:21:38.215 }, 00:21:38.215 "method": "bdev_nvme_attach_controller" 00:21:38.215 },{ 00:21:38.215 "params": { 00:21:38.215 "name": "Nvme3", 00:21:38.215 "trtype": "tcp", 00:21:38.215 "traddr": "10.0.0.2", 00:21:38.215 "adrfam": "ipv4", 00:21:38.215 "trsvcid": "4420", 00:21:38.215 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:38.215 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:38.215 "hdgst": false, 00:21:38.215 "ddgst": false 00:21:38.215 }, 00:21:38.215 "method": "bdev_nvme_attach_controller" 00:21:38.215 },{ 00:21:38.215 "params": { 00:21:38.215 "name": "Nvme4", 00:21:38.215 "trtype": "tcp", 00:21:38.215 "traddr": "10.0.0.2", 00:21:38.215 "adrfam": "ipv4", 00:21:38.215 "trsvcid": "4420", 00:21:38.215 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:38.215 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:38.215 "hdgst": false, 00:21:38.215 "ddgst": false 00:21:38.215 }, 00:21:38.215 "method": "bdev_nvme_attach_controller" 00:21:38.215 },{ 00:21:38.215 "params": { 00:21:38.215 "name": "Nvme5", 00:21:38.215 "trtype": "tcp", 00:21:38.215 "traddr": "10.0.0.2", 00:21:38.215 "adrfam": "ipv4", 00:21:38.215 "trsvcid": "4420", 00:21:38.215 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:38.215 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:38.215 "hdgst": false, 00:21:38.215 "ddgst": false 00:21:38.215 }, 00:21:38.215 "method": "bdev_nvme_attach_controller" 00:21:38.215 },{ 00:21:38.215 "params": { 00:21:38.215 "name": "Nvme6", 00:21:38.215 "trtype": "tcp", 00:21:38.215 "traddr": "10.0.0.2", 00:21:38.215 "adrfam": "ipv4", 00:21:38.215 "trsvcid": "4420", 00:21:38.215 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:38.215 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:38.215 "hdgst": false, 00:21:38.215 "ddgst": false 00:21:38.215 }, 00:21:38.215 "method": "bdev_nvme_attach_controller" 00:21:38.215 },{ 00:21:38.215 "params": { 00:21:38.215 "name": "Nvme7", 00:21:38.215 "trtype": "tcp", 00:21:38.215 "traddr": "10.0.0.2", 00:21:38.215 "adrfam": "ipv4", 00:21:38.215 "trsvcid": "4420", 00:21:38.215 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:38.215 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:38.215 "hdgst": false, 00:21:38.215 "ddgst": false 00:21:38.215 }, 00:21:38.215 "method": "bdev_nvme_attach_controller" 00:21:38.215 },{ 00:21:38.215 "params": { 00:21:38.215 "name": "Nvme8", 00:21:38.215 "trtype": "tcp", 00:21:38.215 "traddr": "10.0.0.2", 00:21:38.215 "adrfam": "ipv4", 00:21:38.215 "trsvcid": "4420", 00:21:38.215 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:38.215 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:38.215 "hdgst": false, 
00:21:38.215 "ddgst": false 00:21:38.215 }, 00:21:38.215 "method": "bdev_nvme_attach_controller" 00:21:38.215 },{ 00:21:38.215 "params": { 00:21:38.215 "name": "Nvme9", 00:21:38.215 "trtype": "tcp", 00:21:38.215 "traddr": "10.0.0.2", 00:21:38.215 "adrfam": "ipv4", 00:21:38.215 "trsvcid": "4420", 00:21:38.215 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:38.215 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:38.215 "hdgst": false, 00:21:38.215 "ddgst": false 00:21:38.215 }, 00:21:38.215 "method": "bdev_nvme_attach_controller" 00:21:38.215 },{ 00:21:38.215 "params": { 00:21:38.215 "name": "Nvme10", 00:21:38.215 "trtype": "tcp", 00:21:38.215 "traddr": "10.0.0.2", 00:21:38.215 "adrfam": "ipv4", 00:21:38.215 "trsvcid": "4420", 00:21:38.215 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:38.215 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:38.215 "hdgst": false, 00:21:38.215 "ddgst": false 00:21:38.215 }, 00:21:38.215 "method": "bdev_nvme_attach_controller" 00:21:38.215 }' 00:21:38.215 [2024-07-25 14:48:58.297262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.215 [2024-07-25 14:48:58.370376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.595 Running I/O for 10 seconds... 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:39.595 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.855 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:39.855 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 
3 -ge 100 ']' 00:21:39.855 14:48:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:40.115 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:40.115 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:40.115 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:40.115 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:40.115 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.115 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:40.115 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.115 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:40.115 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:40.115 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:40.375 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:40.375 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:40.375 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:40.375 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:40.375 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.375 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:40.375 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.375 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2390549 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2390549 ']' 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2390549 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2390549 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- 
# '[' reactor_0 = sudo ']' 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2390549' 00:21:40.376 killing process with pid 2390549 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2390549 00:21:40.376 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2390549 00:21:40.376 Received shutdown signal, test time was about 0.954405 seconds 00:21:40.376 00:21:40.376 Latency(us) 00:21:40.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.376 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:40.376 Verification LBA range: start 0x0 length 0x400 00:21:40.376 Nvme1n1 : 0.93 276.61 17.29 0.00 0.00 228897.17 19261.89 275365.18 00:21:40.376 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:40.376 Verification LBA range: start 0x0 length 0x400 00:21:40.376 Nvme2n1 : 0.89 214.82 13.43 0.00 0.00 289248.24 36472.21 257129.07 00:21:40.376 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:40.376 Verification LBA range: start 0x0 length 0x400 00:21:40.376 Nvme3n1 : 0.88 217.03 13.56 0.00 0.00 280890.03 32141.13 227951.30 00:21:40.376 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:40.376 Verification LBA range: start 0x0 length 0x400 00:21:40.376 Nvme4n1 : 0.90 213.96 13.37 0.00 0.00 280012.06 36244.26 249834.63 00:21:40.376 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:40.376 Verification LBA range: start 0x0 length 0x400 00:21:40.376 Nvme5n1 : 0.92 282.27 17.64 0.00 0.00 208411.71 2621.44 257129.07 00:21:40.376 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:40.376 Verification LBA range: start 0x0 length 0x400 00:21:40.376 Nvme6n1 : 0.87 292.95 18.31 0.00 0.00 196030.55 20857.54 227039.50 00:21:40.376 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:40.376 Verification LBA range: start 0x0 length 0x400 00:21:40.376 Nvme7n1 : 0.91 209.84 13.11 0.00 0.00 270234.19 23478.98 293601.28 00:21:40.376 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:40.376 Verification LBA range: start 0x0 length 0x400 00:21:40.376 Nvme8n1 : 0.88 289.73 18.11 0.00 0.00 190511.64 23592.96 206979.78 00:21:40.376 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:40.376 Verification LBA range: start 0x0 length 0x400 00:21:40.376 Nvme9n1 : 0.95 268.42 16.78 0.00 0.00 194596.29 21541.40 196949.93 00:21:40.376 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:40.376 Verification LBA range: start 0x0 length 0x400 00:21:40.376 Nvme10n1 : 0.89 215.03 13.44 0.00 0.00 246783.41 31685.23 249834.63 00:21:40.376 =================================================================================================================== 00:21:40.376 Total : 2480.65 155.04 0.00 0.00 233546.14 2621.44 293601.28 00:21:40.636 14:49:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:41.575 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2390267 00:21:41.575 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:41.575 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f 
./local-job0-0-verify.state 00:21:41.575 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:41.835 rmmod nvme_tcp 00:21:41.835 rmmod nvme_fabrics 00:21:41.835 rmmod nvme_keyring 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2390267 ']' 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2390267 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2390267 ']' 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2390267 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2390267 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2390267' 00:21:41.835 killing process with pid 2390267 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2390267 00:21:41.835 14:49:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2390267 00:21:42.095 14:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:42.095 14:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:42.095 14:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:42.095 14:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.095 14:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- 
# remove_spdk_ns 00:21:42.095 14:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.095 14:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.095 14:49:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:44.639 00:21:44.639 real 0m7.952s 00:21:44.639 user 0m24.024s 00:21:44.639 sys 0m1.388s 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:44.639 ************************************ 00:21:44.639 END TEST nvmf_shutdown_tc2 00:21:44.639 ************************************ 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:44.639 ************************************ 00:21:44.639 START TEST nvmf_shutdown_tc3 00:21:44.639 ************************************ 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # 
pci_devs=() 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:44.639 14:49:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:44.639 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.639 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:44.640 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:44.640 Found net devices under 0000:86:00.0: cvl_0_0 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
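[Editor's sketch] The trace above shows nvmf/common.sh mapping the two e810 PCI functions (0x8086:0x159b) to their kernel net devices by globbing sysfs before picking cvl_0_0 as the target interface. A minimal standalone sketch of that lookup, assuming the same sysfs layout; the PCI address is taken from the trace, everything else is illustrative:

    # Hedged sketch: list net devices bound to one PCI function via sysfs,
    # mirroring the pci_net_devs glob seen in the xtrace above.
    pci=0000:86:00.0
    pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")          # strip the path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"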
00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:44.640 Found net devices under 0000:86:00.1: cvl_0_1 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:44.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:21:44.640 00:21:44.640 --- 10.0.0.2 ping statistics --- 00:21:44.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.640 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:44.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.390 ms 00:21:44.640 00:21:44.640 --- 10.0.0.1 ping statistics --- 00:21:44.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.640 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2391601 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2391601 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2391601 ']' 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:44.640 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.641 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.641 14:49:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:44.641 [2024-07-25 14:49:04.793489] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:21:44.641 [2024-07-25 14:49:04.793534] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.641 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.641 [2024-07-25 14:49:04.850793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:44.641 [2024-07-25 14:49:04.930179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.641 [2024-07-25 14:49:04.930215] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.641 [2024-07-25 14:49:04.930222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.641 [2024-07-25 14:49:04.930228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.641 [2024-07-25 14:49:04.930233] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.900 [2024-07-25 14:49:04.930274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.900 [2024-07-25 14:49:04.930360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:44.900 [2024-07-25 14:49:04.930493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.900 [2024-07-25 14:49:04.930495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.470 [2024-07-25 14:49:05.636888] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.470 14:49:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.470 Malloc1 00:21:45.470 [2024-07-25 14:49:05.732790] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.470 Malloc2 00:21:45.730 Malloc3 00:21:45.730 Malloc4 00:21:45.730 Malloc5 00:21:45.730 Malloc6 00:21:45.730 Malloc7 00:21:45.730 Malloc8 00:21:45.990 Malloc9 00:21:45.990 Malloc10 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # 
timing_exit create_subsystems 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2391881 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2391881 /var/tmp/bdevperf.sock 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2391881 ']' 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.990 { 00:21:45.990 "params": { 00:21:45.990 "name": "Nvme$subsystem", 00:21:45.990 "trtype": "$TEST_TRANSPORT", 00:21:45.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.990 "adrfam": "ipv4", 00:21:45.990 "trsvcid": "$NVMF_PORT", 00:21:45.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.990 "hdgst": ${hdgst:-false}, 00:21:45.990 "ddgst": ${ddgst:-false} 00:21:45.990 }, 00:21:45.990 "method": "bdev_nvme_attach_controller" 00:21:45.990 } 00:21:45.990 EOF 00:21:45.990 )") 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.990 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.990 { 00:21:45.990 "params": { 00:21:45.990 "name": "Nvme$subsystem", 00:21:45.990 "trtype": "$TEST_TRANSPORT", 00:21:45.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.990 "adrfam": "ipv4", 00:21:45.990 "trsvcid": "$NVMF_PORT", 00:21:45.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.990 "hdgst": ${hdgst:-false}, 00:21:45.991 "ddgst": ${ddgst:-false} 
00:21:45.991 }, 00:21:45.991 "method": "bdev_nvme_attach_controller" 00:21:45.991 } 00:21:45.991 EOF 00:21:45.991 )") 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.991 { 00:21:45.991 "params": { 00:21:45.991 "name": "Nvme$subsystem", 00:21:45.991 "trtype": "$TEST_TRANSPORT", 00:21:45.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.991 "adrfam": "ipv4", 00:21:45.991 "trsvcid": "$NVMF_PORT", 00:21:45.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.991 "hdgst": ${hdgst:-false}, 00:21:45.991 "ddgst": ${ddgst:-false} 00:21:45.991 }, 00:21:45.991 "method": "bdev_nvme_attach_controller" 00:21:45.991 } 00:21:45.991 EOF 00:21:45.991 )") 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.991 { 00:21:45.991 "params": { 00:21:45.991 "name": "Nvme$subsystem", 00:21:45.991 "trtype": "$TEST_TRANSPORT", 00:21:45.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.991 "adrfam": "ipv4", 00:21:45.991 "trsvcid": "$NVMF_PORT", 00:21:45.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.991 "hdgst": ${hdgst:-false}, 00:21:45.991 "ddgst": ${ddgst:-false} 00:21:45.991 }, 00:21:45.991 "method": "bdev_nvme_attach_controller" 00:21:45.991 } 00:21:45.991 EOF 00:21:45.991 )") 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.991 { 00:21:45.991 "params": { 00:21:45.991 "name": "Nvme$subsystem", 00:21:45.991 "trtype": "$TEST_TRANSPORT", 00:21:45.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.991 "adrfam": "ipv4", 00:21:45.991 "trsvcid": "$NVMF_PORT", 00:21:45.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.991 "hdgst": ${hdgst:-false}, 00:21:45.991 "ddgst": ${ddgst:-false} 00:21:45.991 }, 00:21:45.991 "method": "bdev_nvme_attach_controller" 00:21:45.991 } 00:21:45.991 EOF 00:21:45.991 )") 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.991 { 00:21:45.991 "params": { 00:21:45.991 "name": "Nvme$subsystem", 00:21:45.991 "trtype": "$TEST_TRANSPORT", 00:21:45.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.991 "adrfam": "ipv4", 00:21:45.991 "trsvcid": "$NVMF_PORT", 00:21:45.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.991 "hdgst": ${hdgst:-false}, 00:21:45.991 "ddgst": ${ddgst:-false} 00:21:45.991 }, 00:21:45.991 
"method": "bdev_nvme_attach_controller" 00:21:45.991 } 00:21:45.991 EOF 00:21:45.991 )") 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.991 { 00:21:45.991 "params": { 00:21:45.991 "name": "Nvme$subsystem", 00:21:45.991 "trtype": "$TEST_TRANSPORT", 00:21:45.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.991 "adrfam": "ipv4", 00:21:45.991 "trsvcid": "$NVMF_PORT", 00:21:45.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.991 "hdgst": ${hdgst:-false}, 00:21:45.991 "ddgst": ${ddgst:-false} 00:21:45.991 }, 00:21:45.991 "method": "bdev_nvme_attach_controller" 00:21:45.991 } 00:21:45.991 EOF 00:21:45.991 )") 00:21:45.991 [2024-07-25 14:49:06.202050] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:45.991 [2024-07-25 14:49:06.202100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391881 ] 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.991 { 00:21:45.991 "params": { 00:21:45.991 "name": "Nvme$subsystem", 00:21:45.991 "trtype": "$TEST_TRANSPORT", 00:21:45.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.991 "adrfam": "ipv4", 00:21:45.991 "trsvcid": "$NVMF_PORT", 00:21:45.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.991 "hdgst": ${hdgst:-false}, 00:21:45.991 "ddgst": ${ddgst:-false} 00:21:45.991 }, 00:21:45.991 "method": "bdev_nvme_attach_controller" 00:21:45.991 } 00:21:45.991 EOF 00:21:45.991 )") 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.991 { 00:21:45.991 "params": { 00:21:45.991 "name": "Nvme$subsystem", 00:21:45.991 "trtype": "$TEST_TRANSPORT", 00:21:45.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.991 "adrfam": "ipv4", 00:21:45.991 "trsvcid": "$NVMF_PORT", 00:21:45.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.991 "hdgst": ${hdgst:-false}, 00:21:45.991 "ddgst": ${ddgst:-false} 00:21:45.991 }, 00:21:45.991 "method": "bdev_nvme_attach_controller" 00:21:45.991 } 00:21:45.991 EOF 00:21:45.991 )") 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.991 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.991 { 00:21:45.991 "params": { 00:21:45.991 "name": "Nvme$subsystem", 
00:21:45.991 "trtype": "$TEST_TRANSPORT", 00:21:45.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.991 "adrfam": "ipv4", 00:21:45.991 "trsvcid": "$NVMF_PORT", 00:21:45.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.992 "hdgst": ${hdgst:-false}, 00:21:45.992 "ddgst": ${ddgst:-false} 00:21:45.992 }, 00:21:45.992 "method": "bdev_nvme_attach_controller" 00:21:45.992 } 00:21:45.992 EOF 00:21:45.992 )") 00:21:45.992 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:45.992 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.992 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:21:45.992 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:45.992 14:49:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:45.992 "params": { 00:21:45.992 "name": "Nvme1", 00:21:45.992 "trtype": "tcp", 00:21:45.992 "traddr": "10.0.0.2", 00:21:45.992 "adrfam": "ipv4", 00:21:45.992 "trsvcid": "4420", 00:21:45.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.992 "hdgst": false, 00:21:45.992 "ddgst": false 00:21:45.992 }, 00:21:45.992 "method": "bdev_nvme_attach_controller" 00:21:45.992 },{ 00:21:45.992 "params": { 00:21:45.992 "name": "Nvme2", 00:21:45.992 "trtype": "tcp", 00:21:45.992 "traddr": "10.0.0.2", 00:21:45.992 "adrfam": "ipv4", 00:21:45.992 "trsvcid": "4420", 00:21:45.992 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:45.992 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:45.992 "hdgst": false, 00:21:45.992 "ddgst": false 00:21:45.992 }, 00:21:45.992 "method": "bdev_nvme_attach_controller" 00:21:45.992 },{ 00:21:45.992 "params": { 00:21:45.992 "name": "Nvme3", 00:21:45.992 "trtype": "tcp", 00:21:45.992 "traddr": "10.0.0.2", 00:21:45.992 "adrfam": "ipv4", 00:21:45.992 "trsvcid": "4420", 00:21:45.992 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:45.992 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:45.992 "hdgst": false, 00:21:45.992 "ddgst": false 00:21:45.992 }, 00:21:45.992 "method": "bdev_nvme_attach_controller" 00:21:45.992 },{ 00:21:45.992 "params": { 00:21:45.992 "name": "Nvme4", 00:21:45.992 "trtype": "tcp", 00:21:45.992 "traddr": "10.0.0.2", 00:21:45.992 "adrfam": "ipv4", 00:21:45.992 "trsvcid": "4420", 00:21:45.992 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:45.992 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:45.992 "hdgst": false, 00:21:45.992 "ddgst": false 00:21:45.992 }, 00:21:45.992 "method": "bdev_nvme_attach_controller" 00:21:45.992 },{ 00:21:45.992 "params": { 00:21:45.992 "name": "Nvme5", 00:21:45.992 "trtype": "tcp", 00:21:45.992 "traddr": "10.0.0.2", 00:21:45.992 "adrfam": "ipv4", 00:21:45.992 "trsvcid": "4420", 00:21:45.992 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:45.992 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:45.992 "hdgst": false, 00:21:45.992 "ddgst": false 00:21:45.992 }, 00:21:45.992 "method": "bdev_nvme_attach_controller" 00:21:45.992 },{ 00:21:45.992 "params": { 00:21:45.992 "name": "Nvme6", 00:21:45.992 "trtype": "tcp", 00:21:45.992 "traddr": "10.0.0.2", 00:21:45.992 "adrfam": "ipv4", 00:21:45.992 "trsvcid": "4420", 00:21:45.992 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:45.992 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:45.992 "hdgst": false, 00:21:45.992 "ddgst": false 00:21:45.992 }, 00:21:45.992 "method": "bdev_nvme_attach_controller" 00:21:45.992 },{ 00:21:45.992 "params": { 
00:21:45.992 "name": "Nvme7", 00:21:45.992 "trtype": "tcp", 00:21:45.992 "traddr": "10.0.0.2", 00:21:45.992 "adrfam": "ipv4", 00:21:45.992 "trsvcid": "4420", 00:21:45.992 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:45.992 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:45.992 "hdgst": false, 00:21:45.992 "ddgst": false 00:21:45.992 }, 00:21:45.992 "method": "bdev_nvme_attach_controller" 00:21:45.992 },{ 00:21:45.992 "params": { 00:21:45.992 "name": "Nvme8", 00:21:45.992 "trtype": "tcp", 00:21:45.992 "traddr": "10.0.0.2", 00:21:45.992 "adrfam": "ipv4", 00:21:45.992 "trsvcid": "4420", 00:21:45.992 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:45.992 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:45.992 "hdgst": false, 00:21:45.992 "ddgst": false 00:21:45.992 }, 00:21:45.992 "method": "bdev_nvme_attach_controller" 00:21:45.992 },{ 00:21:45.992 "params": { 00:21:45.992 "name": "Nvme9", 00:21:45.992 "trtype": "tcp", 00:21:45.992 "traddr": "10.0.0.2", 00:21:45.992 "adrfam": "ipv4", 00:21:45.992 "trsvcid": "4420", 00:21:45.992 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:45.992 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:45.992 "hdgst": false, 00:21:45.992 "ddgst": false 00:21:45.992 }, 00:21:45.992 "method": "bdev_nvme_attach_controller" 00:21:45.992 },{ 00:21:45.992 "params": { 00:21:45.992 "name": "Nvme10", 00:21:45.992 "trtype": "tcp", 00:21:45.992 "traddr": "10.0.0.2", 00:21:45.992 "adrfam": "ipv4", 00:21:45.992 "trsvcid": "4420", 00:21:45.992 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:45.992 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:45.992 "hdgst": false, 00:21:45.992 "ddgst": false 00:21:45.992 }, 00:21:45.992 "method": "bdev_nvme_attach_controller" 00:21:45.992 }' 00:21:45.992 [2024-07-25 14:49:06.258342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.253 [2024-07-25 14:49:06.335797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.160 Running I/O for 10 seconds... 
00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2391601 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2391601 ']' 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2391601 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 
2391601
00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2391601'
00:21:48.744 killing process with pid 2391601
00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2391601
00:21:48.744 14:49:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2391601
00:21:48.744 [2024-07-25 14:49:08.811170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb85330 is same with the state(5) to be set
[... the same tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* line ("The recv state of tqpair=0x... is same with the state(5) to be set") repeats from 14:49:08.811 through 14:49:08.820 for tqpair=0xb85330, 0xb87da0, 0xb857e0, 0xb86160, 0xb86610, 0xb86ac0, 0xb86f70, 0xb87440 and 0xb878f0 ...]
00:21:48.749 [2024-07-25 14:49:08.826923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:48.749 [2024-07-25 14:49:08.826953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:48.749 [2024-07-25 14:49:08.826963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:48.749 [2024-07-25 14:49:08.826970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:48.749 [2024-07-25 14:49:08.826978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:48.749 [2024-07-25 14:49:08.826985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:48.749 [2024-07-25 14:49:08.826992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.826999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb780 is same with the state(5) to be set 00:21:48.749 [2024-07-25 14:49:08.827036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f6b0 is same with the state(5) to be set 00:21:48.749 [2024-07-25 14:49:08.827125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2195c50 is same with the state(5) to be set 00:21:48.749 [2024-07-25 14:49:08.827203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214cc70 is same with the state(5) to be set 00:21:48.749 [2024-07-25 14:49:08.827281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.749 [2024-07-25 14:49:08.827336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.749 [2024-07-25 14:49:08.827342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23188d0 is same with the state(5) to be set 00:21:48.749 [2024-07-25 14:49:08.827372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 
[2024-07-25 14:49:08.827387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2190c50 is same with the state(5) to be set 00:21:48.750 [2024-07-25 14:49:08.827448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2320610 is same with the state(5) to be set 00:21:48.750 [2024-07-25 14:49:08.827524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eaa60 is same with the state(5) to be set 00:21:48.750 [2024-07-25 14:49:08.827606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ec630 is same with the state(5) to be set 00:21:48.750 [2024-07-25 14:49:08.827682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 [2024-07-25 14:49:08.827717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.750 
[2024-07-25 14:49:08.827731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b340 is same with the state(5) to be set 00:21:48.750 [2024-07-25 14:49:08.827800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.827808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.827829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.827845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.827862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.827876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.827891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.827905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.827920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.827934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827941] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.827948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.827962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.827976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.827990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.827998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.828006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.828014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.828021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.828028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.828034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.750 [2024-07-25 14:49:08.828051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.750 [2024-07-25 14:49:08.828058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.751 [2024-07-25 14:49:08.828619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.751 [2024-07-25 14:49:08.828625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828807] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22865c0 was disconnected and freed. reset controller. 
00:21:48.752 [2024-07-25 14:49:08.828875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.828985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.828992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 
[2024-07-25 14:49:08.829029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 
14:49:08.829187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 14:49:08.829319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.752 [2024-07-25 14:49:08.829326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.752 [2024-07-25 
14:49:08.836736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 
14:49:08.836895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.836989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.836997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837046] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.753 [2024-07-25 14:49:08.837241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.753 [2024-07-25 14:49:08.837304] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22879a0 was disconnected and freed. reset controller. 00:21:48.753 [2024-07-25 14:49:08.837342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837467] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.837982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.837991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.838002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.838010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.838022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.838031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.838046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.838060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.838071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.838080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.838091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.838099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.838111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.838119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.754 [2024-07-25 14:49:08.838131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.754 [2024-07-25 14:49:08.838139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2288e40 is same with the state(5) to be set 00:21:48.755 [2024-07-25 14:49:08.838716] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2288e40 was disconnected and freed. reset controller. 
00:21:48.755 [2024-07-25 14:49:08.838880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.838989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.838998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.839009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.839019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.839033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.839047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.839059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.839068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.839079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.839088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 
[2024-07-25 14:49:08.839099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.839108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.839120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.839129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.755 [2024-07-25 14:49:08.839140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.755 [2024-07-25 14:49:08.839150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 
14:49:08.839307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839508] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.756 [2024-07-25 14:49:08.839935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.756 [2024-07-25 14:49:08.839944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.757 [2024-07-25 14:49:08.839955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.757 [2024-07-25 14:49:08.839965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.757 [2024-07-25 14:49:08.839976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.757 [2024-07-25 14:49:08.839985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.757 [2024-07-25 14:49:08.839996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.757 [2024-07-25 14:49:08.840006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.757 [2024-07-25 14:49:08.840016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.757 [2024-07-25 14:49:08.840026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.757 [2024-07-25 14:49:08.840037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.757 [2024-07-25 14:49:08.840049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.757 [2024-07-25 14:49:08.840061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.757 [2024-07-25 14:49:08.840072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.757 [2024-07-25 14:49:08.840084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.757 [2024-07-25 14:49:08.840092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.757 [2024-07-25 14:49:08.840103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.757 [2024-07-25 14:49:08.840112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.757 [2024-07-25 14:49:08.840123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.757 [2024-07-25 14:49:08.840132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.757 [2024-07-25 14:49:08.840143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.757 [2024-07-25 14:49:08.840152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.757 [2024-07-25 14:49:08.840162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.757 [2024-07-25 14:49:08.840171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.757 [2024-07-25 14:49:08.840183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.757 [2024-07-25 14:49:08.840192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.757 [2024-07-25 14:49:08.840260] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2146c00 was disconnected and freed. reset controller. 00:21:48.757 [2024-07-25 14:49:08.840628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb780 (9): Bad file descriptor 00:21:48.757 [2024-07-25 14:49:08.840657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216f6b0 (9): Bad file descriptor 00:21:48.757 [2024-07-25 14:49:08.840673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195c50 (9): Bad file descriptor 00:21:48.757 [2024-07-25 14:49:08.840687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214cc70 (9): Bad file descriptor 00:21:48.757 [2024-07-25 14:49:08.840702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23188d0 (9): Bad file descriptor 00:21:48.757 [2024-07-25 14:49:08.840716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2190c50 (9): Bad file descriptor 00:21:48.757 [2024-07-25 14:49:08.840732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2320610 (9): Bad file descriptor 00:21:48.757 [2024-07-25 14:49:08.840747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eaa60 (9): Bad file descriptor 00:21:48.757 [2024-07-25 14:49:08.840761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ec630 (9): Bad file descriptor 00:21:48.757 [2024-07-25 14:49:08.840779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9b340 (9): Bad file descriptor 00:21:48.757 [2024-07-25 14:49:08.846066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.757 [2024-07-25 14:49:08.846489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:48.757 [2024-07-25 14:49:08.846522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting 
controller 00:21:48.757 [2024-07-25 14:49:08.846536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:48.757 [2024-07-25 14:49:08.847034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.757 [2024-07-25 14:49:08.847058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x214cc70 with addr=10.0.0.2, port=4420 00:21:48.757 [2024-07-25 14:49:08.847070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214cc70 is same with the state(5) to be set 00:21:48.757 [2024-07-25 14:49:08.847867] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:48.757 [2024-07-25 14:49:08.848140] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:48.757 [2024-07-25 14:49:08.848184] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:48.757 [2024-07-25 14:49:08.848227] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:48.757 [2024-07-25 14:49:08.848262] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:48.757 [2024-07-25 14:49:08.848305] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:48.757 [2024-07-25 14:49:08.848759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.757 [2024-07-25 14:49:08.848772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23188d0 with addr=10.0.0.2, port=4420 00:21:48.757 [2024-07-25 14:49:08.848780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23188d0 is same with the state(5) to be set 00:21:48.757 [2024-07-25 14:49:08.849266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.757 [2024-07-25 14:49:08.849276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2320610 with addr=10.0.0.2, port=4420 00:21:48.757 [2024-07-25 14:49:08.849283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2320610 is same with the state(5) to be set 00:21:48.757 [2024-07-25 14:49:08.849654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.757 [2024-07-25 14:49:08.849664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c9b340 with addr=10.0.0.2, port=4420 00:21:48.757 [2024-07-25 14:49:08.849670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b340 is same with the state(5) to be set 00:21:48.757 [2024-07-25 14:49:08.849681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214cc70 (9): Bad file descriptor 00:21:48.757 [2024-07-25 14:49:08.849777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23188d0 (9): Bad file descriptor 00:21:48.757 [2024-07-25 14:49:08.849788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2320610 (9): Bad file descriptor 00:21:48.757 [2024-07-25 14:49:08.849796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9b340 (9): Bad file descriptor 00:21:48.757 [2024-07-25 14:49:08.849804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.757 [2024-07-25 14:49:08.849811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.757 [2024-07-25 14:49:08.849819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.757 [2024-07-25 14:49:08.849874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.757 [2024-07-25 14:49:08.849883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:48.757 [2024-07-25 14:49:08.849889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:48.757 [2024-07-25 14:49:08.849895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:48.757 [2024-07-25 14:49:08.849908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:48.757 [2024-07-25 14:49:08.849914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:48.757 [2024-07-25 14:49:08.849920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:48.757 [2024-07-25 14:49:08.849930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:48.757 [2024-07-25 14:49:08.849936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:48.757 [2024-07-25 14:49:08.849942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:48.757 [2024-07-25 14:49:08.849970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.757 [2024-07-25 14:49:08.849977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.757 [2024-07-25 14:49:08.849982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.757 [2024-07-25 14:49:08.850704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.757 [2024-07-25 14:49:08.850718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.757 [2024-07-25 14:49:08.850731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 
14:49:08.850867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.850991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.850999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851013] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.758 [2024-07-25 14:49:08.851289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.758 [2024-07-25 14:49:08.851295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.851607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.851614] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a370 is same with the state(5) to be set 00:21:48.759 [2024-07-25 14:49:08.852618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.759 [2024-07-25 14:49:08.852871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.759 [2024-07-25 14:49:08.852878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.852885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.852893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.852900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.852907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.852914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.852922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.852928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.852936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.852942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.852950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.852957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.852965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.852971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.852981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.852988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.852996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:48.760 [2024-07-25 14:49:08.853222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 
14:49:08.853369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.760 [2024-07-25 14:49:08.853448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.760 [2024-07-25 14:49:08.853456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.853462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.853470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.853477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.853485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.853491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.853500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.853506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.853514] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.853520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.853528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.853536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.853544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.853550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.853559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.853565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.853572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147f60 is same with the state(5) to be set 00:21:48.761 [2024-07-25 14:49:08.854590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.854987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.854995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.855001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.855009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.855016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.855024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.855030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.761 [2024-07-25 14:49:08.855038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.761 [2024-07-25 14:49:08.855052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.855536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.762 [2024-07-25 14:49:08.855543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2277780 is same with the state(5) to be set 00:21:48.762 [2024-07-25 14:49:08.856545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.762 [2024-07-25 14:49:08.856557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856567] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.856986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.856993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.857001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.857008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.857016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.857022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.857031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.857038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.857051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.857058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.857066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.857072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.857080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.857087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.857095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.857102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.857110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.857117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.857125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.857132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.763 [2024-07-25 14:49:08.857140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.763 [2024-07-25 14:49:08.857146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:48.764 [2024-07-25 14:49:08.857176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 
14:49:08.857324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857473] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.857502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.857510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278bc0 is same with the state(5) to be set 00:21:48.764 [2024-07-25 14:49:08.858486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.858498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.858508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.858515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.858523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.858530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.858539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.858545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.858553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.858559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.858568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.858574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.858584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.858591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.858599] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.858606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.858614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.858621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.858628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.858635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.858643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.858650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.858658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.858664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.858673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.858679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.858687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.764 [2024-07-25 14:49:08.858694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.764 [2024-07-25 14:49:08.858702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.858990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.858997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:48.765 [2024-07-25 14:49:08.859205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.765 [2024-07-25 14:49:08.859277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.765 [2024-07-25 14:49:08.859285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.859292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.859299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.859306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.859314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.859320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.859330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.859336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.859344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 
14:49:08.859351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.859359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.859365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.859374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.859380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.859388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.859394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.859402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.859409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.859416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.859423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.859431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.859438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.859445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2279f50 is same with the state(5) to be set 00:21:48.766 [2024-07-25 14:49:08.861102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.766 [2024-07-25 14:49:08.861509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.766 [2024-07-25 14:49:08.861516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.767 [2024-07-25 14:49:08.861924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.767 [2024-07-25 14:49:08.861932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.768 [2024-07-25 14:49:08.861940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.768 [2024-07-25 14:49:08.861947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.768 [2024-07-25 14:49:08.861955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.768 [2024-07-25 14:49:08.861961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.768 [2024-07-25 14:49:08.861969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.768 [2024-07-25 14:49:08.861976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.768 [2024-07-25 14:49:08.861984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.768 [2024-07-25 14:49:08.861990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.768 [2024-07-25 14:49:08.861999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.768 [2024-07-25 14:49:08.862005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.768 [2024-07-25 14:49:08.862013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.768 [2024-07-25 14:49:08.862019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.768 [2024-07-25 14:49:08.862028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.768 [2024-07-25 14:49:08.862034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.768 [2024-07-25 14:49:08.862045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.768 [2024-07-25 14:49:08.862052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.768 [2024-07-25 14:49:08.862061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:48.768 [2024-07-25 14:49:08.862068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:48.768 [2024-07-25 14:49:08.862075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227b3d0 is same with the state(5) to be set
00:21:48.768 [2024-07-25 14:49:08.863788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:48.768 [2024-07-25 14:49:08.863806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:48.768 [2024-07-25 14:49:08.863815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:48.768 [2024-07-25 14:49:08.863825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:48.768 [2024-07-25 14:49:08.863885] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:48.768 [2024-07-25 14:49:08.863900] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:48.768 [2024-07-25 14:49:08.863965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:48.768 task offset: 24576 on job bdev=Nvme1n1 fails
00:21:48.768
00:21:48.768 Latency(us)
00:21:48.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:48.768 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:48.768 Job: Nvme1n1 ended in about 0.70 seconds with error
00:21:48.768 Verification LBA range: start 0x0 length 0x400
00:21:48.768 Nvme1n1 : 0.70 273.59 17.10 91.20 0.00 173172.87 21313.45 204244.37
00:21:48.768 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:48.768 Job: Nvme2n1 ended in about 0.70 seconds with error
00:21:48.768 Verification LBA range: start 0x0 length 0x400
00:21:48.768 Nvme2n1 : 0.70 273.12 17.07 91.04 0.00 169524.54 21541.40 186008.26
00:21:48.768 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:48.768 Job: Nvme3n1 ended in about 0.70 seconds with error
00:21:48.768 Verification LBA range: start 0x0 length 0x400
00:21:48.768 Nvme3n1 : 0.70 181.76 11.36 90.88 0.00 221242.47 17210.32 235245.75
00:21:48.768 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:48.768 Job: Nvme4n1 ended in about 0.71 seconds with error
00:21:48.768 Verification LBA range: start 0x0 length 0x400
00:21:48.768 Nvme4n1 : 0.71 185.26 11.58 84.21 0.00 218549.20 22453.20 205156.17
00:21:48.768 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:48.768 Job: Nvme5n1 ended in about 0.71 seconds with error
00:21:48.768 Verification LBA range: start 0x0 length 0x400
00:21:48.768 Nvme5n1 : 0.71 181.42 11.34 90.71 0.00 211081.42 18122.13 232510.33
00:21:48.768 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:48.768 Job: Nvme6n1 ended in about 0.71 seconds with error
00:21:48.768 Verification LBA range: start 0x0 length 0x400
00:21:48.768 Nvme6n1 : 0.71 89.58 5.60 89.58 0.00 313343.55 23478.98 271717.95
00:21:48.768 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:48.768 Job: Nvme7n1 ended in about 0.72 seconds with error
00:21:48.768 Verification LBA range: start 0x0 length 0x400
00:21:48.768 Nvme7n1 : 0.72 178.66 11.17 89.33 0.00 204206.38 22225.25 221568.67
00:21:48.768 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:48.768 Job: Nvme8n1 ended in about 0.72 seconds with error
00:21:48.768 Verification LBA range: start 0x0 length 0x400
00:21:48.768 Nvme8n1 : 0.72 89.09 5.57 89.09 0.00 299543.60 24732.72 342838.76
00:21:48.768 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:48.768 Job: Nvme9n1 ended in about 0.72 seconds with error
00:21:48.768 Verification LBA range: start 0x0 length 0x400
00:21:48.768 Nvme9n1 : 0.72 88.85 5.55 88.85 0.00 292642.28 36700.16 262599.90
00:21:48.768 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:48.768 Job: Nvme10n1 ended in about 0.72 seconds with error
00:21:48.768 Verification LBA range: start 0x0 length 0x400
00:21:48.768 Nvme10n1 : 0.72 182.58 11.41 88.52 0.00 186788.65 10485.76 219745.06
00:21:48.768 ===================================================================================================================
00:21:48.768 Total : 1723.90 107.74 893.40 0.00 217432.33 10485.76 342838.76
00:21:48.768 [2024-07-25 14:49:08.888541] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:48.768 [2024-07-25 14:49:08.888581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:48.768 [2024-07-25 14:49:08.889219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.768 [2024-07-25 14:49:08.889237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ec630 with addr=10.0.0.2, port=4420
00:21:48.768 [2024-07-25 14:49:08.889253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ec630 is same with the state(5) to be set
00:21:48.768 [2024-07-25 14:49:08.889761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.768 [2024-07-25 14:49:08.889771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2190c50 with addr=10.0.0.2, port=4420
00:21:48.768 [2024-07-25 14:49:08.889778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2190c50 is same with the state(5) to be set
00:21:48.768 [2024-07-25 14:49:08.890208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.768 [2024-07-25 14:49:08.890218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x216f6b0 with addr=10.0.0.2, port=4420
00:21:48.768 [2024-07-25 14:49:08.890225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f6b0 is same with the state(5) to be set
00:21:48.768 [2024-07-25 14:49:08.890641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.768 [2024-07-25 14:49:08.890651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195c50 with addr=10.0.0.2, port=4420
00:21:48.768 [2024-07-25 14:49:08.890657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195c50 is same with the state(5) to be set
00:21:48.768 [2024-07-25 14:49:08.892031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.768 [2024-07-25 14:49:08.892049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:48.768 [2024-07-25 14:49:08.892057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:48.768 [2024-07-25 14:49:08.892067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:48.768 [2024-07-25 14:49:08.892566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.768 [2024-07-25 14:49:08.892578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb780 with addr=10.0.0.2, port=4420 00:21:48.768 [2024-07-25 14:49:08.892586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb780 is same with the state(5) to be set 00:21:48.768 [2024-07-25 14:49:08.893065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.768 [2024-07-25 14:49:08.893076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eaa60 with addr=10.0.0.2, port=4420 00:21:48.768 [2024-07-25 14:49:08.893083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eaa60 is same with the state(5) to be set 00:21:48.768 [2024-07-25 14:49:08.893094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ec630 (9): Bad file descriptor 00:21:48.769 [2024-07-25 14:49:08.893105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2190c50 (9): Bad file descriptor 00:21:48.769 [2024-07-25 14:49:08.893114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216f6b0 (9): Bad file descriptor 00:21:48.769 [2024-07-25 14:49:08.893122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195c50 (9): Bad file descriptor 00:21:48.769 [2024-07-25 14:49:08.893154] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:48.769 [2024-07-25 14:49:08.893164] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:48.769 [2024-07-25 14:49:08.893173] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:48.769 [2024-07-25 14:49:08.893182] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:48.769 [2024-07-25 14:49:08.893692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.769 [2024-07-25 14:49:08.893704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x214cc70 with addr=10.0.0.2, port=4420 00:21:48.769 [2024-07-25 14:49:08.893715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214cc70 is same with the state(5) to be set 00:21:48.769 [2024-07-25 14:49:08.894198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.769 [2024-07-25 14:49:08.894209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c9b340 with addr=10.0.0.2, port=4420 00:21:48.769 [2024-07-25 14:49:08.894216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b340 is same with the state(5) to be set 00:21:48.769 [2024-07-25 14:49:08.894697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.769 [2024-07-25 14:49:08.894707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2320610 with addr=10.0.0.2, port=4420 00:21:48.769 [2024-07-25 14:49:08.894714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2320610 is same with the state(5) to be set 00:21:48.769 [2024-07-25 14:49:08.895166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.769 [2024-07-25 14:49:08.895177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23188d0 with addr=10.0.0.2, port=4420 00:21:48.769 [2024-07-25 14:49:08.895183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23188d0 is same with the state(5) to be set 00:21:48.769 [2024-07-25 14:49:08.895191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb780 (9): Bad file descriptor 00:21:48.769 [2024-07-25 14:49:08.895201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eaa60 (9): Bad file descriptor 00:21:48.769 [2024-07-25 14:49:08.895208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:48.769 [2024-07-25 14:49:08.895214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:48.769 [2024-07-25 14:49:08.895221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:48.769 [2024-07-25 14:49:08.895233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:48.769 [2024-07-25 14:49:08.895239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:48.769 [2024-07-25 14:49:08.895245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:48.769 [2024-07-25 14:49:08.895257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:48.769 [2024-07-25 14:49:08.895262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:48.769 [2024-07-25 14:49:08.895268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:21:48.769 [2024-07-25 14:49:08.895277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:48.769 [2024-07-25 14:49:08.895283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:48.769 [2024-07-25 14:49:08.895288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:48.769 [2024-07-25 14:49:08.895358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.769 [2024-07-25 14:49:08.895366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.769 [2024-07-25 14:49:08.895371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.769 [2024-07-25 14:49:08.895376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.769 [2024-07-25 14:49:08.895383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214cc70 (9): Bad file descriptor 00:21:48.769 [2024-07-25 14:49:08.895392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9b340 (9): Bad file descriptor 00:21:48.769 [2024-07-25 14:49:08.895402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2320610 (9): Bad file descriptor 00:21:48.769 [2024-07-25 14:49:08.895410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23188d0 (9): Bad file descriptor 00:21:48.769 [2024-07-25 14:49:08.895417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:48.769 [2024-07-25 14:49:08.895423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:48.769 [2024-07-25 14:49:08.895428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:48.769 [2024-07-25 14:49:08.895437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:48.769 [2024-07-25 14:49:08.895443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:48.769 [2024-07-25 14:49:08.895448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:48.769 [2024-07-25 14:49:08.895476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.769 [2024-07-25 14:49:08.895483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.769 [2024-07-25 14:49:08.895488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.769 [2024-07-25 14:49:08.895494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.769 [2024-07-25 14:49:08.895500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:48.769 [2024-07-25 14:49:08.895508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:48.769 [2024-07-25 14:49:08.895514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:48.769 [2024-07-25 14:49:08.895521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:48.769 [2024-07-25 14:49:08.895529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:48.769 [2024-07-25 14:49:08.895534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:48.769 [2024-07-25 14:49:08.895540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:48.769 [2024-07-25 14:49:08.895548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:48.769 [2024-07-25 14:49:08.895554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:48.769 [2024-07-25 14:49:08.895560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:48.769 [2024-07-25 14:49:08.895582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.769 [2024-07-25 14:49:08.895589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.769 [2024-07-25 14:49:08.895594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.769 [2024-07-25 14:49:08.895600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
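Note on the failure pattern above: errno 111 is ECONNREFUSED, i.e. the shutdown test has already killed the target process, so every reconnect the bdev_nvme failover path attempts is refused and each controller is left in the failed state; the test tolerates this and finishes below. A quick manual check of the same condition could look like this (hypothetical commands, not part of the recorded run):

  # With the nvmf target gone, a plain TCP probe to the listener fails the
  # same way the reconnect attempts above do (errno 111, ECONNREFUSED).
  nc -zv 10.0.0.2 4420 || echo "connection refused, as expected while the target is down"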
00:21:49.029 14:49:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:49.029 14:49:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:49.969 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2391881 00:21:49.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2391881) - No such process 00:21:49.969 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:49.969 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:49.969 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:49.969 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:49.969 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:49.969 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:49.969 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:49.969 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:49.969 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:49.969 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:49.969 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:49.969 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:49.969 rmmod nvme_tcp 00:21:50.229 rmmod nvme_fabrics 00:21:50.229 rmmod nvme_keyring 00:21:50.229 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.229 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:50.229 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:50.229 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:50.229 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:50.229 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:50.229 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:50.229 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.229 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:50.229 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.229 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.229 14:49:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.135 14:49:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:52.135 00:21:52.135 real 0m7.897s 00:21:52.135 user 0m19.986s 00:21:52.135 sys 0m1.216s 00:21:52.135 
14:49:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:52.135 14:49:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:52.135 ************************************ 00:21:52.135 END TEST nvmf_shutdown_tc3 00:21:52.135 ************************************ 00:21:52.135 14:49:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:52.135 14:49:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:52.135 00:21:52.135 real 0m31.101s 00:21:52.135 user 1m18.558s 00:21:52.135 sys 0m8.300s 00:21:52.135 14:49:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:52.135 14:49:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:52.135 ************************************ 00:21:52.135 END TEST nvmf_shutdown 00:21:52.135 ************************************ 00:21:52.396 14:49:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:52.396 14:49:12 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:52.396 14:49:12 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.396 14:49:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:52.396 14:49:12 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:52.396 14:49:12 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:52.396 14:49:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:52.396 14:49:12 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:21:52.396 14:49:12 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:52.396 14:49:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:52.396 14:49:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:52.396 14:49:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:52.396 ************************************ 00:21:52.396 START TEST nvmf_multicontroller 00:21:52.396 ************************************ 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:52.396 * Looking for test storage... 
00:21:52.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.396 14:49:12 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:52.397 14:49:12 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:52.397 14:49:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.743 14:49:17 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:57.743 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:57.743 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:57.743 Found net devices under 0000:86:00.0: cvl_0_0 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:57.743 Found net devices under 0000:86:00.1: cvl_0_1 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.743 14:49:17 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:57.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:21:57.743 00:21:57.743 --- 10.0.0.2 ping statistics --- 00:21:57.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.743 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:21:57.743 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:57.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:21:57.744 00:21:57.744 --- 10.0.0.1 ping statistics --- 00:21:57.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.744 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:21:57.744 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.744 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:57.744 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:57.744 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.744 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:57.744 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:57.744 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.744 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:57.744 14:49:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:57.744 14:49:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:57.744 14:49:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:57.744 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:57.744 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.744 14:49:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2396126 00:21:57.744 14:49:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2396126 00:21:57.744 14:49:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:57.744 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2396126 ']' 00:21:57.744 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.744 14:49:18 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.744 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.744 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.744 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.004 [2024-07-25 14:49:18.067521] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:21:58.004 [2024-07-25 14:49:18.067569] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.004 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.004 [2024-07-25 14:49:18.124603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:58.004 [2024-07-25 14:49:18.204711] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.004 [2024-07-25 14:49:18.204746] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.004 [2024-07-25 14:49:18.204753] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.004 [2024-07-25 14:49:18.204762] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.004 [2024-07-25 14:49:18.204767] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
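The nvmf_tgt startup above (and the reactor messages that follow) come from nvmfappstart, which boils down to launching the target inside the namespace created by nvmftestinit and waiting on its RPC socket before the rpc_cmd calls that follow; roughly, as a sketch using the default /var/tmp/spdk.sock path the trace itself prints:

  # Launch the target in the test namespace, as in the trace.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # waitforlisten: poll the RPC socket until the application answers.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done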
00:21:58.004 [2024-07-25 14:49:18.204865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.004 [2024-07-25 14:49:18.204885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.004 [2024-07-25 14:49:18.204886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.943 [2024-07-25 14:49:18.921687] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.943 Malloc0 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.943 [2024-07-25 14:49:18.981671] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.943 
14:49:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.943 [2024-07-25 14:49:18.989621] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.943 14:49:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.943 Malloc1 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2396215 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 2396215 /var/tmp/bdevperf.sock 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2396215 ']' 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:58.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.943 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.882 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.882 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:59.882 14:49:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:59.882 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.882 14:49:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.882 NVMe0n1 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.882 1 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.882 request: 00:21:59.882 { 00:21:59.882 "name": "NVMe0", 00:21:59.882 "trtype": "tcp", 00:21:59.882 "traddr": "10.0.0.2", 00:21:59.882 "adrfam": "ipv4", 00:21:59.882 "trsvcid": "4420", 00:21:59.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.882 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:59.882 "hostaddr": "10.0.0.2", 00:21:59.882 "hostsvcid": "60000", 00:21:59.882 "prchk_reftag": false, 00:21:59.882 "prchk_guard": false, 00:21:59.882 "hdgst": false, 00:21:59.882 "ddgst": false, 00:21:59.882 "method": "bdev_nvme_attach_controller", 00:21:59.882 "req_id": 1 00:21:59.882 } 00:21:59.882 Got JSON-RPC error response 00:21:59.882 response: 00:21:59.882 { 00:21:59.882 "code": -114, 00:21:59.882 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:59.882 } 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:59.882 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.883 request: 00:21:59.883 { 00:21:59.883 "name": "NVMe0", 00:21:59.883 "trtype": "tcp", 00:21:59.883 "traddr": "10.0.0.2", 00:21:59.883 "adrfam": "ipv4", 00:21:59.883 "trsvcid": "4420", 00:21:59.883 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:59.883 "hostaddr": "10.0.0.2", 00:21:59.883 "hostsvcid": "60000", 00:21:59.883 "prchk_reftag": false, 00:21:59.883 "prchk_guard": false, 00:21:59.883 
"hdgst": false, 00:21:59.883 "ddgst": false, 00:21:59.883 "method": "bdev_nvme_attach_controller", 00:21:59.883 "req_id": 1 00:21:59.883 } 00:21:59.883 Got JSON-RPC error response 00:21:59.883 response: 00:21:59.883 { 00:21:59.883 "code": -114, 00:21:59.883 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:59.883 } 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.883 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.143 request: 00:22:00.143 { 00:22:00.143 "name": "NVMe0", 00:22:00.143 "trtype": "tcp", 00:22:00.143 "traddr": "10.0.0.2", 00:22:00.143 "adrfam": "ipv4", 00:22:00.143 "trsvcid": "4420", 00:22:00.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.143 "hostaddr": "10.0.0.2", 00:22:00.143 "hostsvcid": "60000", 00:22:00.143 "prchk_reftag": false, 00:22:00.143 "prchk_guard": false, 00:22:00.143 "hdgst": false, 00:22:00.143 "ddgst": false, 00:22:00.143 "multipath": "disable", 00:22:00.143 "method": "bdev_nvme_attach_controller", 00:22:00.143 "req_id": 1 00:22:00.143 } 00:22:00.143 Got JSON-RPC error response 00:22:00.143 response: 00:22:00.143 { 00:22:00.143 "code": -114, 00:22:00.143 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:00.143 } 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:00.143 14:49:20 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.143 request: 00:22:00.143 { 00:22:00.143 "name": "NVMe0", 00:22:00.143 "trtype": "tcp", 00:22:00.143 "traddr": "10.0.0.2", 00:22:00.143 "adrfam": "ipv4", 00:22:00.143 "trsvcid": "4420", 00:22:00.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.143 "hostaddr": "10.0.0.2", 00:22:00.143 "hostsvcid": "60000", 00:22:00.143 "prchk_reftag": false, 00:22:00.143 "prchk_guard": false, 00:22:00.143 "hdgst": false, 00:22:00.143 "ddgst": false, 00:22:00.143 "multipath": "failover", 00:22:00.143 "method": "bdev_nvme_attach_controller", 00:22:00.143 "req_id": 1 00:22:00.143 } 00:22:00.143 Got JSON-RPC error response 00:22:00.143 response: 00:22:00.143 { 00:22:00.143 "code": -114, 00:22:00.143 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:00.143 } 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.143 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.143 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.403 00:22:00.403 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.403 14:49:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:00.403 14:49:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:00.403 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.403 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.403 14:49:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.403 14:49:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:00.403 14:49:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.785 0 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2396215 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2396215 ']' 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2396215 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2396215 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2396215' 00:22:01.785 killing process with pid 2396215 00:22:01.785 14:49:21 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2396215 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2396215 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:22:01.785 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:01.785 [2024-07-25 14:49:19.090517] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:22:01.785 [2024-07-25 14:49:19.090564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2396215 ] 00:22:01.785 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.785 [2024-07-25 14:49:19.144326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.785 [2024-07-25 14:49:19.218344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.785 [2024-07-25 14:49:20.517982] bdev.c:4610:bdev_name_add: *ERROR*: Bdev name b8ab4bdd-24a2-404a-b59b-328271aa0aca already exists 00:22:01.785 [2024-07-25 14:49:20.518009] bdev.c:7719:bdev_register: *ERROR*: Unable to add uuid:b8ab4bdd-24a2-404a-b59b-328271aa0aca alias for bdev NVMe1n1 00:22:01.785 [2024-07-25 14:49:20.518017] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:01.785 Running I/O for 1 seconds... 
00:22:01.785 00:22:01.785 Latency(us) 00:22:01.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.785 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:01.785 NVMe0n1 : 1.01 22257.51 86.94 0.00 0.00 5731.77 4017.64 33052.94 00:22:01.785 =================================================================================================================== 00:22:01.785 Total : 22257.51 86.94 0.00 0.00 5731.77 4017.64 33052.94 00:22:01.785 Received shutdown signal, test time was about 1.000000 seconds 00:22:01.785 00:22:01.785 Latency(us) 00:22:01.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.785 =================================================================================================================== 00:22:01.785 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:01.785 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:01.785 14:49:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:01.785 rmmod nvme_tcp 00:22:01.785 rmmod nvme_fabrics 00:22:01.785 rmmod nvme_keyring 00:22:01.785 14:49:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:01.785 14:49:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:01.785 14:49:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:01.785 14:49:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2396126 ']' 00:22:01.785 14:49:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2396126 00:22:01.785 14:49:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2396126 ']' 00:22:01.785 14:49:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2396126 00:22:01.785 14:49:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:22:01.785 14:49:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:01.785 14:49:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2396126 00:22:01.785 14:49:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:01.785 14:49:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:01.785 14:49:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2396126' 00:22:01.785 killing process with pid 2396126 00:22:01.786 14:49:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2396126 00:22:01.786 14:49:22 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2396126 00:22:02.045 14:49:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:02.045 14:49:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:02.045 14:49:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:02.045 14:49:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:02.045 14:49:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:02.045 14:49:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.045 14:49:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.045 14:49:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.583 14:49:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:04.583 00:22:04.583 real 0m11.816s 00:22:04.583 user 0m16.788s 00:22:04.583 sys 0m4.833s 00:22:04.583 14:49:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:04.583 14:49:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:04.583 ************************************ 00:22:04.583 END TEST nvmf_multicontroller 00:22:04.583 ************************************ 00:22:04.583 14:49:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:04.583 14:49:24 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:04.583 14:49:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:04.583 14:49:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:04.583 14:49:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:04.583 ************************************ 00:22:04.583 START TEST nvmf_aer 00:22:04.583 ************************************ 00:22:04.583 14:49:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:04.583 * Looking for test storage... 
00:22:04.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:04.583 14:49:24 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.583 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:04.583 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.583 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.583 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.583 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:04.584 14:49:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.863 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:09.864 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:22:09.864 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:09.864 Found net devices under 0000:86:00.0: cvl_0_0 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:09.864 Found net devices under 0000:86:00.1: cvl_0_1 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.864 
14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:09.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:22:09.864 00:22:09.864 --- 10.0.0.2 ping statistics --- 00:22:09.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.864 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:09.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:22:09.864 00:22:09.864 --- 10.0.0.1 ping statistics --- 00:22:09.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.864 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2400155 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2400155 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2400155 ']' 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:09.864 14:49:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:09.864 [2024-07-25 14:49:29.965527] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:22:09.864 [2024-07-25 14:49:29.965578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.864 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.864 [2024-07-25 14:49:30.024620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.864 [2024-07-25 14:49:30.115666] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.864 [2024-07-25 14:49:30.115705] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:09.864 [2024-07-25 14:49:30.115712] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.864 [2024-07-25 14:49:30.115718] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.864 [2024-07-25 14:49:30.115724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.864 [2024-07-25 14:49:30.115798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.864 [2024-07-25 14:49:30.115893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.864 [2024-07-25 14:49:30.117058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.864 [2024-07-25 14:49:30.117061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:10.804 [2024-07-25 14:49:30.819048] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:10.804 Malloc0 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:10.804 [2024-07-25 14:49:30.870691] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:10.804 [ 00:22:10.804 { 00:22:10.804 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:10.804 "subtype": "Discovery", 00:22:10.804 "listen_addresses": [], 00:22:10.804 "allow_any_host": true, 00:22:10.804 "hosts": [] 00:22:10.804 }, 00:22:10.804 { 00:22:10.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.804 "subtype": "NVMe", 00:22:10.804 "listen_addresses": [ 00:22:10.804 { 00:22:10.804 "trtype": "TCP", 00:22:10.804 "adrfam": "IPv4", 00:22:10.804 "traddr": "10.0.0.2", 00:22:10.804 "trsvcid": "4420" 00:22:10.804 } 00:22:10.804 ], 00:22:10.804 "allow_any_host": true, 00:22:10.804 "hosts": [], 00:22:10.804 "serial_number": "SPDK00000000000001", 00:22:10.804 "model_number": "SPDK bdev Controller", 00:22:10.804 "max_namespaces": 2, 00:22:10.804 "min_cntlid": 1, 00:22:10.804 "max_cntlid": 65519, 00:22:10.804 "namespaces": [ 00:22:10.804 { 00:22:10.804 "nsid": 1, 00:22:10.804 "bdev_name": "Malloc0", 00:22:10.804 "name": "Malloc0", 00:22:10.804 "nguid": "3167641450A644919B4460863B289091", 00:22:10.804 "uuid": "31676414-50a6-4491-9b44-60863b289091" 00:22:10.804 } 00:22:10.804 ] 00:22:10.804 } 00:22:10.804 ] 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2400407 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:10.804 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:10.804 14:49:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:11.064 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.065 Malloc1 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.065 [ 00:22:11.065 { 00:22:11.065 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:11.065 "subtype": "Discovery", 00:22:11.065 "listen_addresses": [], 00:22:11.065 "allow_any_host": true, 00:22:11.065 "hosts": [] 00:22:11.065 }, 00:22:11.065 { 00:22:11.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.065 "subtype": "NVMe", 00:22:11.065 "listen_addresses": [ 00:22:11.065 { 00:22:11.065 "trtype": "TCP", 00:22:11.065 "adrfam": "IPv4", 00:22:11.065 "traddr": "10.0.0.2", 00:22:11.065 "trsvcid": "4420" 00:22:11.065 } 00:22:11.065 ], 00:22:11.065 "allow_any_host": true, 00:22:11.065 "hosts": [], 00:22:11.065 "serial_number": "SPDK00000000000001", 00:22:11.065 "model_number": "SPDK bdev Controller", 00:22:11.065 "max_namespaces": 2, 00:22:11.065 "min_cntlid": 1, 00:22:11.065 "max_cntlid": 65519, 00:22:11.065 "namespaces": [ 00:22:11.065 { 00:22:11.065 "nsid": 1, 00:22:11.065 "bdev_name": "Malloc0", 00:22:11.065 "name": "Malloc0", 00:22:11.065 "nguid": "3167641450A644919B4460863B289091", 00:22:11.065 "uuid": "31676414-50a6-4491-9b44-60863b289091" 00:22:11.065 }, 00:22:11.065 { 00:22:11.065 "nsid": 2, 00:22:11.065 "bdev_name": "Malloc1", 00:22:11.065 "name": "Malloc1", 00:22:11.065 "nguid": "874EEAC614B9433182C170E8F612DBFB", 00:22:11.065 "uuid": "874eeac6-14b9-4331-82c1-70e8f612dbfb" 00:22:11.065 } 00:22:11.065 ] 00:22:11.065 } 00:22:11.065 ] 00:22:11.065 Asynchronous Event Request test 00:22:11.065 Attaching to 10.0.0.2 00:22:11.065 Attached to 10.0.0.2 00:22:11.065 Registering asynchronous event callbacks... 00:22:11.065 Starting namespace attribute notice tests for all controllers... 00:22:11.065 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:11.065 aer_cb - Changed Namespace 00:22:11.065 Cleaning up... 
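The AER scenario that just completed can be reproduced by hand against a running target; the sketch below strings together the same RPCs and the aer tool invocation seen in this log. The relative paths and the default /var/tmp/spdk.sock socket are assumptions for illustration, not part of host/aer.sh itself.

# Sketch of the nvmf_aer flow (assumes nvmf_tgt is already up on /var/tmp/spdk.sock).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The aer tool registers its AEN callback, then touches the file to signal readiness.
./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

# Attaching a second namespace is what produces the "Changed Namespace" event shown above.
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2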
00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2400407 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:11.065 rmmod nvme_tcp 00:22:11.065 rmmod nvme_fabrics 00:22:11.065 rmmod nvme_keyring 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2400155 ']' 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2400155 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2400155 ']' 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2400155 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2400155 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2400155' 00:22:11.065 killing process with pid 2400155 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2400155 00:22:11.065 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2400155 00:22:11.325 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # 
'[' '' == iso ']' 00:22:11.325 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:11.325 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:11.325 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.325 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:11.325 14:49:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.325 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.325 14:49:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.863 14:49:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:13.863 00:22:13.863 real 0m9.161s 00:22:13.863 user 0m7.064s 00:22:13.863 sys 0m4.516s 00:22:13.863 14:49:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:13.863 14:49:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:13.863 ************************************ 00:22:13.863 END TEST nvmf_aer 00:22:13.863 ************************************ 00:22:13.863 14:49:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:13.863 14:49:33 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:13.863 14:49:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:13.863 14:49:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.863 14:49:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.863 ************************************ 00:22:13.863 START TEST nvmf_async_init 00:22:13.863 ************************************ 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:13.863 * Looking for test storage... 
00:22:13.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:13.863 14:49:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a6b476f2bca8412b95192abd325955ca 00:22:13.864 14:49:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:13.864 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:13.864 14:49:33 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.864 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:13.864 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:13.864 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:13.864 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.864 14:49:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.864 14:49:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.864 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:13.864 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:13.864 14:49:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:13.864 14:49:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:19.143 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:19.143 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.143 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:19.144 Found net devices under 0000:86:00.0: cvl_0_0 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
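The device discovery above keys off the PCI vendor/device pair (0x8086/0x159b for the E810 ports) and then resolves the kernel interface names through sysfs. A minimal stand-alone version of that lookup might look like the following; the fixed BDF is only an example taken from this log, and the real helper filters against its cached PCI list first.

# Illustrative lookup of the net devices backing one PCI function (BDF from this log).
pci=0000:86:00.0
for netdev in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$netdev" ] && echo "Found net devices under $pci: ${netdev##*/}"
done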
00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:19.144 Found net devices under 0000:86:00.1: cvl_0_1 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.144 14:49:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:22:19.144 00:22:19.144 --- 10.0.0.2 ping statistics --- 00:22:19.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.144 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:22:19.144 00:22:19.144 --- 10.0.0.1 ping statistics --- 00:22:19.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.144 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2403857 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2403857 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2403857 ']' 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.144 14:49:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.144 [2024-07-25 14:49:39.163291] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
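Condensed, the interface wiring performed just above amounts to the sequence below; the cvl_0_0/cvl_0_1 names come from the earlier driver setup, and the commands simply mirror nvmf_tcp_init in nvmf/common.sh rather than adding anything new (address flushes omitted for brevity).

# Target port in a namespace at 10.0.0.2, initiator port in the root namespace at 10.0.0.1.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> initiator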
00:22:19.144 [2024-07-25 14:49:39.163337] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.144 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.144 [2024-07-25 14:49:39.221676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.144 [2024-07-25 14:49:39.301426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.144 [2024-07-25 14:49:39.301460] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.144 [2024-07-25 14:49:39.301467] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.144 [2024-07-25 14:49:39.301473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.144 [2024-07-25 14:49:39.301478] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.144 [2024-07-25 14:49:39.301515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.714 14:49:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.714 14:49:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:19.714 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.714 14:49:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:19.714 14:49:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.714 14:49:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.714 14:49:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:19.714 14:49:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.714 14:49:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.714 [2024-07-25 14:49:40.004564] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.974 null0 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.974 14:49:40 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a6b476f2bca8412b95192abd325955ca 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.974 [2024-07-25 14:49:40.048781] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.974 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.233 nvme0n1 00:22:20.233 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.233 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:20.233 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.233 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.233 [ 00:22:20.233 { 00:22:20.233 "name": "nvme0n1", 00:22:20.233 "aliases": [ 00:22:20.233 "a6b476f2-bca8-412b-9519-2abd325955ca" 00:22:20.233 ], 00:22:20.233 "product_name": "NVMe disk", 00:22:20.233 "block_size": 512, 00:22:20.233 "num_blocks": 2097152, 00:22:20.233 "uuid": "a6b476f2-bca8-412b-9519-2abd325955ca", 00:22:20.233 "assigned_rate_limits": { 00:22:20.233 "rw_ios_per_sec": 0, 00:22:20.233 "rw_mbytes_per_sec": 0, 00:22:20.233 "r_mbytes_per_sec": 0, 00:22:20.233 "w_mbytes_per_sec": 0 00:22:20.233 }, 00:22:20.233 "claimed": false, 00:22:20.233 "zoned": false, 00:22:20.233 "supported_io_types": { 00:22:20.233 "read": true, 00:22:20.233 "write": true, 00:22:20.233 "unmap": false, 00:22:20.233 "flush": true, 00:22:20.233 "reset": true, 00:22:20.233 "nvme_admin": true, 00:22:20.233 "nvme_io": true, 00:22:20.233 "nvme_io_md": false, 00:22:20.233 "write_zeroes": true, 00:22:20.233 "zcopy": false, 00:22:20.233 "get_zone_info": false, 00:22:20.233 "zone_management": false, 00:22:20.233 "zone_append": false, 00:22:20.233 "compare": true, 00:22:20.233 "compare_and_write": true, 00:22:20.233 "abort": true, 00:22:20.233 "seek_hole": false, 00:22:20.233 "seek_data": false, 00:22:20.233 "copy": true, 00:22:20.233 "nvme_iov_md": false 00:22:20.233 }, 00:22:20.233 "memory_domains": [ 00:22:20.233 { 00:22:20.233 "dma_device_id": "system", 00:22:20.233 "dma_device_type": 1 00:22:20.233 } 00:22:20.233 ], 00:22:20.233 "driver_specific": { 00:22:20.233 "nvme": [ 00:22:20.233 { 00:22:20.233 "trid": { 00:22:20.233 "trtype": "TCP", 00:22:20.233 "adrfam": "IPv4", 00:22:20.233 "traddr": "10.0.0.2", 
00:22:20.233 "trsvcid": "4420", 00:22:20.233 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:20.233 }, 00:22:20.233 "ctrlr_data": { 00:22:20.233 "cntlid": 1, 00:22:20.233 "vendor_id": "0x8086", 00:22:20.233 "model_number": "SPDK bdev Controller", 00:22:20.233 "serial_number": "00000000000000000000", 00:22:20.233 "firmware_revision": "24.09", 00:22:20.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:20.233 "oacs": { 00:22:20.233 "security": 0, 00:22:20.233 "format": 0, 00:22:20.233 "firmware": 0, 00:22:20.233 "ns_manage": 0 00:22:20.233 }, 00:22:20.233 "multi_ctrlr": true, 00:22:20.233 "ana_reporting": false 00:22:20.233 }, 00:22:20.233 "vs": { 00:22:20.233 "nvme_version": "1.3" 00:22:20.233 }, 00:22:20.233 "ns_data": { 00:22:20.233 "id": 1, 00:22:20.233 "can_share": true 00:22:20.233 } 00:22:20.233 } 00:22:20.233 ], 00:22:20.233 "mp_policy": "active_passive" 00:22:20.233 } 00:22:20.233 } 00:22:20.233 ] 00:22:20.233 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.233 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:20.233 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.233 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.233 [2024-07-25 14:49:40.314258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:20.233 [2024-07-25 14:49:40.314313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1221390 (9): Bad file descriptor 00:22:20.233 [2024-07-25 14:49:40.446125] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:20.233 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.233 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:20.233 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.233 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.233 [ 00:22:20.233 { 00:22:20.233 "name": "nvme0n1", 00:22:20.233 "aliases": [ 00:22:20.233 "a6b476f2-bca8-412b-9519-2abd325955ca" 00:22:20.233 ], 00:22:20.233 "product_name": "NVMe disk", 00:22:20.233 "block_size": 512, 00:22:20.233 "num_blocks": 2097152, 00:22:20.233 "uuid": "a6b476f2-bca8-412b-9519-2abd325955ca", 00:22:20.233 "assigned_rate_limits": { 00:22:20.233 "rw_ios_per_sec": 0, 00:22:20.233 "rw_mbytes_per_sec": 0, 00:22:20.233 "r_mbytes_per_sec": 0, 00:22:20.233 "w_mbytes_per_sec": 0 00:22:20.233 }, 00:22:20.233 "claimed": false, 00:22:20.233 "zoned": false, 00:22:20.233 "supported_io_types": { 00:22:20.233 "read": true, 00:22:20.233 "write": true, 00:22:20.233 "unmap": false, 00:22:20.233 "flush": true, 00:22:20.233 "reset": true, 00:22:20.233 "nvme_admin": true, 00:22:20.233 "nvme_io": true, 00:22:20.233 "nvme_io_md": false, 00:22:20.233 "write_zeroes": true, 00:22:20.233 "zcopy": false, 00:22:20.233 "get_zone_info": false, 00:22:20.233 "zone_management": false, 00:22:20.233 "zone_append": false, 00:22:20.233 "compare": true, 00:22:20.233 "compare_and_write": true, 00:22:20.233 "abort": true, 00:22:20.233 "seek_hole": false, 00:22:20.233 "seek_data": false, 00:22:20.233 "copy": true, 00:22:20.233 "nvme_iov_md": false 00:22:20.233 }, 00:22:20.233 "memory_domains": [ 00:22:20.233 { 00:22:20.233 "dma_device_id": "system", 00:22:20.233 "dma_device_type": 
1 00:22:20.233 } 00:22:20.233 ], 00:22:20.233 "driver_specific": { 00:22:20.234 "nvme": [ 00:22:20.234 { 00:22:20.234 "trid": { 00:22:20.234 "trtype": "TCP", 00:22:20.234 "adrfam": "IPv4", 00:22:20.234 "traddr": "10.0.0.2", 00:22:20.234 "trsvcid": "4420", 00:22:20.234 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:20.234 }, 00:22:20.234 "ctrlr_data": { 00:22:20.234 "cntlid": 2, 00:22:20.234 "vendor_id": "0x8086", 00:22:20.234 "model_number": "SPDK bdev Controller", 00:22:20.234 "serial_number": "00000000000000000000", 00:22:20.234 "firmware_revision": "24.09", 00:22:20.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:20.234 "oacs": { 00:22:20.234 "security": 0, 00:22:20.234 "format": 0, 00:22:20.234 "firmware": 0, 00:22:20.234 "ns_manage": 0 00:22:20.234 }, 00:22:20.234 "multi_ctrlr": true, 00:22:20.234 "ana_reporting": false 00:22:20.234 }, 00:22:20.234 "vs": { 00:22:20.234 "nvme_version": "1.3" 00:22:20.234 }, 00:22:20.234 "ns_data": { 00:22:20.234 "id": 1, 00:22:20.234 "can_share": true 00:22:20.234 } 00:22:20.234 } 00:22:20.234 ], 00:22:20.234 "mp_policy": "active_passive" 00:22:20.234 } 00:22:20.234 } 00:22:20.234 ] 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.NujjGyBIz6 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.NujjGyBIz6 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.234 [2024-07-25 14:49:40.506847] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:20.234 [2024-07-25 14:49:40.506947] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NujjGyBIz6 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.234 [2024-07-25 14:49:40.514861] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NujjGyBIz6 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.234 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.493 [2024-07-25 14:49:40.526909] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:20.493 [2024-07-25 14:49:40.526944] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:20.493 nvme0n1 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.493 [ 00:22:20.493 { 00:22:20.493 "name": "nvme0n1", 00:22:20.493 "aliases": [ 00:22:20.493 "a6b476f2-bca8-412b-9519-2abd325955ca" 00:22:20.493 ], 00:22:20.493 "product_name": "NVMe disk", 00:22:20.493 "block_size": 512, 00:22:20.493 "num_blocks": 2097152, 00:22:20.493 "uuid": "a6b476f2-bca8-412b-9519-2abd325955ca", 00:22:20.493 "assigned_rate_limits": { 00:22:20.493 "rw_ios_per_sec": 0, 00:22:20.493 "rw_mbytes_per_sec": 0, 00:22:20.493 "r_mbytes_per_sec": 0, 00:22:20.493 "w_mbytes_per_sec": 0 00:22:20.493 }, 00:22:20.493 "claimed": false, 00:22:20.493 "zoned": false, 00:22:20.493 "supported_io_types": { 00:22:20.493 "read": true, 00:22:20.493 "write": true, 00:22:20.493 "unmap": false, 00:22:20.493 "flush": true, 00:22:20.493 "reset": true, 00:22:20.493 "nvme_admin": true, 00:22:20.493 "nvme_io": true, 00:22:20.493 "nvme_io_md": false, 00:22:20.493 "write_zeroes": true, 00:22:20.493 "zcopy": false, 00:22:20.493 "get_zone_info": false, 00:22:20.493 "zone_management": false, 00:22:20.493 "zone_append": false, 00:22:20.493 "compare": true, 00:22:20.493 "compare_and_write": true, 00:22:20.493 "abort": true, 00:22:20.493 "seek_hole": false, 00:22:20.493 "seek_data": false, 00:22:20.493 "copy": true, 00:22:20.493 "nvme_iov_md": false 00:22:20.493 }, 00:22:20.493 "memory_domains": [ 00:22:20.493 { 00:22:20.493 "dma_device_id": "system", 00:22:20.493 "dma_device_type": 1 00:22:20.493 } 00:22:20.493 ], 00:22:20.493 "driver_specific": { 00:22:20.493 "nvme": [ 00:22:20.493 { 00:22:20.493 "trid": { 00:22:20.493 "trtype": "TCP", 00:22:20.493 "adrfam": "IPv4", 00:22:20.493 "traddr": "10.0.0.2", 00:22:20.493 "trsvcid": "4421", 00:22:20.493 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:20.493 }, 00:22:20.493 "ctrlr_data": { 00:22:20.493 "cntlid": 3, 00:22:20.493 "vendor_id": "0x8086", 00:22:20.493 "model_number": "SPDK bdev Controller", 00:22:20.493 "serial_number": "00000000000000000000", 00:22:20.493 "firmware_revision": "24.09", 00:22:20.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:22:20.493 "oacs": { 00:22:20.493 "security": 0, 00:22:20.493 "format": 0, 00:22:20.493 "firmware": 0, 00:22:20.493 "ns_manage": 0 00:22:20.493 }, 00:22:20.493 "multi_ctrlr": true, 00:22:20.493 "ana_reporting": false 00:22:20.493 }, 00:22:20.493 "vs": { 00:22:20.493 "nvme_version": "1.3" 00:22:20.493 }, 00:22:20.493 "ns_data": { 00:22:20.493 "id": 1, 00:22:20.493 "can_share": true 00:22:20.493 } 00:22:20.493 } 00:22:20.493 ], 00:22:20.493 "mp_policy": "active_passive" 00:22:20.493 } 00:22:20.493 } 00:22:20.493 ] 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.NujjGyBIz6 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:20.493 rmmod nvme_tcp 00:22:20.493 rmmod nvme_fabrics 00:22:20.493 rmmod nvme_keyring 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2403857 ']' 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2403857 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2403857 ']' 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2403857 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2403857 00:22:20.493 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:20.494 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:20.494 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2403857' 00:22:20.494 killing process with pid 2403857 00:22:20.494 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2403857 00:22:20.494 [2024-07-25 14:49:40.728991] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:22:20.494 [2024-07-25 14:49:40.729013] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:20.494 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2403857 00:22:20.752 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:20.752 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:20.752 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:20.752 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.752 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.752 14:49:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.752 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.752 14:49:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.297 14:49:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:23.297 00:22:23.297 real 0m9.314s 00:22:23.297 user 0m3.486s 00:22:23.297 sys 0m4.350s 00:22:23.297 14:49:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:23.297 14:49:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.297 ************************************ 00:22:23.297 END TEST nvmf_async_init 00:22:23.297 ************************************ 00:22:23.297 14:49:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:23.297 14:49:43 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:23.297 14:49:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:23.297 14:49:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:23.297 14:49:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:23.297 ************************************ 00:22:23.297 START TEST dma 00:22:23.297 ************************************ 00:22:23.298 14:49:43 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:23.298 * Looking for test storage... 
00:22:23.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:23.298 14:49:43 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.298 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.298 14:49:43 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.299 14:49:43 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.299 14:49:43 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.299 14:49:43 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.299 14:49:43 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.299 14:49:43 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.299 14:49:43 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:23.299 14:49:43 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.299 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:23.299 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:23.299 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:23.299 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.299 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.299 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.299 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:23.299 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:23.299 14:49:43 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:23.300 14:49:43 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:23.300 14:49:43 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:23.300 00:22:23.300 real 0m0.120s 00:22:23.300 user 0m0.050s 00:22:23.300 sys 0m0.078s 00:22:23.300 14:49:43 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:23.300 14:49:43 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:22:23.300 ************************************ 00:22:23.300 END TEST dma 00:22:23.300 ************************************ 00:22:23.300 14:49:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:23.300 14:49:43 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:23.300 14:49:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:23.300 14:49:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:23.300 14:49:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:23.300 ************************************ 00:22:23.300 START TEST nvmf_identify 00:22:23.300 ************************************ 00:22:23.300 14:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:23.300 * Looking for test storage... 
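The dma test above finishes in well under a second because host/dma.sh sources test/nvmf/common.sh, sees a non-RDMA transport, and exits immediately (host/dma.sh@12-13 in the trace). A minimal sketch of that guard, with the transport variable name assumed from the usual nvmf test conventions:

    #!/usr/bin/env bash
    # RDMA-only DMA host test: skip cleanly when the run uses TCP.
    TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}   # normally populated from --transport by common.sh (assumed here)
    if [ "$TEST_TRANSPORT" != rdma ]; then
        # Nothing to exercise over TCP; exit 0 so run_test still records a pass.
        exit 0
    fi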
00:22:23.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:23.300 14:49:43 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.300 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:23.300 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.300 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.300 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.300 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.300 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.300 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.300 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.300 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.301 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.301 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.301 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:23.301 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:23.301 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.301 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.301 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.301 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.301 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.301 14:49:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.301 14:49:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.301 14:49:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.301 14:49:43 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:23.302 14:49:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:28.617 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:28.617 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:28.617 Found net devices under 0000:86:00.0: cvl_0_0 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:28.617 Found net devices under 0000:86:00.1: cvl_0_1 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.617 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:28.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:22:28.618 00:22:28.618 --- 10.0.0.2 ping statistics --- 00:22:28.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.618 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:22:28.618 00:22:28.618 --- 10.0.0.1 ping statistics --- 00:22:28.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.618 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2407526 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2407526 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2407526 ']' 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.618 14:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:28.618 [2024-07-25 14:49:48.794363] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:22:28.618 [2024-07-25 14:49:48.794412] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.618 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.618 [2024-07-25 14:49:48.852613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.878 [2024-07-25 14:49:48.934366] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
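The nvmf_tcp_init steps traced above move one port of the e810 NIC (cvl_0_0) into a dedicated network namespace for the target while its sibling (cvl_0_1) stays in the root namespace as the initiator, then verify reachability in both directions before the target starts. The same sequence, collected from the trace into a standalone sketch using the interface names and addresses seen in this run:

    # Clear any stale addresses first.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    # Target side lives in the cvl_0_0_ns_spdk namespace; initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic on the default port and confirm both directions work.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With this in place the target application is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...), which is exactly what the trace shows next.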
00:22:28.878 [2024-07-25 14:49:48.934405] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.878 [2024-07-25 14:49:48.934412] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.878 [2024-07-25 14:49:48.934418] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.878 [2024-07-25 14:49:48.934423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.878 [2024-07-25 14:49:48.934482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.878 [2024-07-25 14:49:48.934578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.878 [2024-07-25 14:49:48.934664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.878 [2024-07-25 14:49:48.934665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.447 [2024-07-25 14:49:49.610813] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.447 Malloc0 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.447 [2024-07-25 14:49:49.698720] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.447 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.447 [ 00:22:29.447 { 00:22:29.447 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:29.447 "subtype": "Discovery", 00:22:29.447 "listen_addresses": [ 00:22:29.447 { 00:22:29.447 "trtype": "TCP", 00:22:29.447 "adrfam": "IPv4", 00:22:29.447 "traddr": "10.0.0.2", 00:22:29.447 "trsvcid": "4420" 00:22:29.447 } 00:22:29.447 ], 00:22:29.447 "allow_any_host": true, 00:22:29.447 "hosts": [] 00:22:29.447 }, 00:22:29.447 { 00:22:29.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.447 "subtype": "NVMe", 00:22:29.447 "listen_addresses": [ 00:22:29.447 { 00:22:29.447 "trtype": "TCP", 00:22:29.447 "adrfam": "IPv4", 00:22:29.447 "traddr": "10.0.0.2", 00:22:29.447 "trsvcid": "4420" 00:22:29.447 } 00:22:29.447 ], 00:22:29.447 "allow_any_host": true, 00:22:29.447 "hosts": [], 00:22:29.447 "serial_number": "SPDK00000000000001", 00:22:29.447 "model_number": "SPDK bdev Controller", 00:22:29.447 "max_namespaces": 32, 00:22:29.447 "min_cntlid": 1, 00:22:29.447 "max_cntlid": 65519, 00:22:29.447 "namespaces": [ 00:22:29.447 { 00:22:29.447 "nsid": 1, 00:22:29.447 "bdev_name": "Malloc0", 00:22:29.447 "name": "Malloc0", 00:22:29.447 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:29.447 "eui64": "ABCDEF0123456789", 00:22:29.447 "uuid": "bd05c734-5343-4eeb-9412-c65263c53eea" 00:22:29.447 } 00:22:29.447 ] 00:22:29.447 } 00:22:29.447 ] 00:22:29.448 14:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.448 14:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:29.710 [2024-07-25 14:49:49.750988] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
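The nvmf_get_subsystems output above is the result of the RPC sequence identify.sh issues against the freshly started target before it launches spdk_nvme_identify. A standalone sketch of that sequence, assuming the standard scripts/rpc.py client (the harness's rpc_cmd wrapper talks to the same /var/tmp/spdk.sock):

    # Mirror of host/identify.sh@24-37 as seen in the trace.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_get_subsystems      # produces the JSON listing shown above

After that, spdk_nvme_identify is pointed at the discovery subsystem over TCP (trtype:tcp traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery), and the remainder of this section is its connection debug output followed by the discovery controller's identify data.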
00:22:29.710 [2024-07-25 14:49:49.751034] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2407767 ] 00:22:29.710 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.710 [2024-07-25 14:49:49.781579] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:29.710 [2024-07-25 14:49:49.781632] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:29.710 [2024-07-25 14:49:49.781637] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:29.710 [2024-07-25 14:49:49.781648] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:29.710 [2024-07-25 14:49:49.781653] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:29.710 [2024-07-25 14:49:49.782253] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:29.710 [2024-07-25 14:49:49.782282] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a01ec0 0 00:22:29.710 [2024-07-25 14:49:49.789054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:29.710 [2024-07-25 14:49:49.789064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:29.710 [2024-07-25 14:49:49.789069] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:29.710 [2024-07-25 14:49:49.789071] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:29.710 [2024-07-25 14:49:49.789105] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.789111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.789115] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a01ec0) 00:22:29.710 [2024-07-25 14:49:49.789128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:29.710 [2024-07-25 14:49:49.789144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84e40, cid 0, qid 0 00:22:29.710 [2024-07-25 14:49:49.797052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.710 [2024-07-25 14:49:49.797060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.710 [2024-07-25 14:49:49.797063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.797067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84e40) on tqpair=0x1a01ec0 00:22:29.710 [2024-07-25 14:49:49.797079] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:29.710 [2024-07-25 14:49:49.797085] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:29.710 [2024-07-25 14:49:49.797089] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:29.710 [2024-07-25 14:49:49.797102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.797106] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.797109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a01ec0) 00:22:29.710 [2024-07-25 14:49:49.797116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.710 [2024-07-25 14:49:49.797128] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84e40, cid 0, qid 0 00:22:29.710 [2024-07-25 14:49:49.797386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.710 [2024-07-25 14:49:49.797400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.710 [2024-07-25 14:49:49.797404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.797408] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84e40) on tqpair=0x1a01ec0 00:22:29.710 [2024-07-25 14:49:49.797414] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:29.710 [2024-07-25 14:49:49.797428] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:29.710 [2024-07-25 14:49:49.797437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.797440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.797443] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a01ec0) 00:22:29.710 [2024-07-25 14:49:49.797452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.710 [2024-07-25 14:49:49.797466] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84e40, cid 0, qid 0 00:22:29.710 [2024-07-25 14:49:49.797625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.710 [2024-07-25 14:49:49.797635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.710 [2024-07-25 14:49:49.797638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.797642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84e40) on tqpair=0x1a01ec0 00:22:29.710 [2024-07-25 14:49:49.797647] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:29.710 [2024-07-25 14:49:49.797657] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:29.710 [2024-07-25 14:49:49.797665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.797668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.797672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a01ec0) 00:22:29.710 [2024-07-25 14:49:49.797679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.710 [2024-07-25 14:49:49.797692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84e40, cid 0, qid 0 00:22:29.710 [2024-07-25 14:49:49.797873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.710 
[2024-07-25 14:49:49.797884] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.710 [2024-07-25 14:49:49.797887] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.797891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84e40) on tqpair=0x1a01ec0 00:22:29.710 [2024-07-25 14:49:49.797896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:29.710 [2024-07-25 14:49:49.797908] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.797912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.797915] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a01ec0) 00:22:29.710 [2024-07-25 14:49:49.797922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.710 [2024-07-25 14:49:49.797934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84e40, cid 0, qid 0 00:22:29.710 [2024-07-25 14:49:49.798143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.710 [2024-07-25 14:49:49.798156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.710 [2024-07-25 14:49:49.798159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.798162] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84e40) on tqpair=0x1a01ec0 00:22:29.710 [2024-07-25 14:49:49.798167] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:29.710 [2024-07-25 14:49:49.798172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:29.710 [2024-07-25 14:49:49.798184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:29.710 [2024-07-25 14:49:49.798289] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:29.710 [2024-07-25 14:49:49.798293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:29.710 [2024-07-25 14:49:49.798302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.798306] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.798309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a01ec0) 00:22:29.710 [2024-07-25 14:49:49.798317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.710 [2024-07-25 14:49:49.798331] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84e40, cid 0, qid 0 00:22:29.710 [2024-07-25 14:49:49.798488] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.710 [2024-07-25 14:49:49.798498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.710 [2024-07-25 14:49:49.798501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.798504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84e40) on tqpair=0x1a01ec0 00:22:29.710 [2024-07-25 14:49:49.798509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:29.710 [2024-07-25 14:49:49.798520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.798523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.710 [2024-07-25 14:49:49.798527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a01ec0) 00:22:29.710 [2024-07-25 14:49:49.798533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.710 [2024-07-25 14:49:49.798545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84e40, cid 0, qid 0 00:22:29.710 [2024-07-25 14:49:49.798695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.710 [2024-07-25 14:49:49.798705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.711 [2024-07-25 14:49:49.798708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.798712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84e40) on tqpair=0x1a01ec0 00:22:29.711 [2024-07-25 14:49:49.798716] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:29.711 [2024-07-25 14:49:49.798720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:29.711 [2024-07-25 14:49:49.798728] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:29.711 [2024-07-25 14:49:49.798741] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:29.711 [2024-07-25 14:49:49.798752] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.798755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a01ec0) 00:22:29.711 [2024-07-25 14:49:49.798761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.711 [2024-07-25 14:49:49.798774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84e40, cid 0, qid 0 00:22:29.711 [2024-07-25 14:49:49.799002] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.711 [2024-07-25 14:49:49.799012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.711 [2024-07-25 14:49:49.799018] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799022] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a01ec0): datao=0, datal=4096, cccid=0 00:22:29.711 [2024-07-25 14:49:49.799026] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a84e40) on tqpair(0x1a01ec0): expected_datao=0, payload_size=4096 00:22:29.711 [2024-07-25 14:49:49.799030] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799037] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799040] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.711 [2024-07-25 14:49:49.799159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.711 [2024-07-25 14:49:49.799162] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84e40) on tqpair=0x1a01ec0 00:22:29.711 [2024-07-25 14:49:49.799174] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:29.711 [2024-07-25 14:49:49.799182] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:29.711 [2024-07-25 14:49:49.799186] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:29.711 [2024-07-25 14:49:49.799190] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:29.711 [2024-07-25 14:49:49.799195] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:29.711 [2024-07-25 14:49:49.799199] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:29.711 [2024-07-25 14:49:49.799208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:29.711 [2024-07-25 14:49:49.799216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a01ec0) 00:22:29.711 [2024-07-25 14:49:49.799230] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:29.711 [2024-07-25 14:49:49.799244] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84e40, cid 0, qid 0 00:22:29.711 [2024-07-25 14:49:49.799413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.711 [2024-07-25 14:49:49.799422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.711 [2024-07-25 14:49:49.799425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84e40) on tqpair=0x1a01ec0 00:22:29.711 [2024-07-25 14:49:49.799436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799443] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a01ec0) 00:22:29.711 [2024-07-25 14:49:49.799449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.711 [2024-07-25 14:49:49.799454] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a01ec0) 00:22:29.711 [2024-07-25 14:49:49.799465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.711 [2024-07-25 14:49:49.799474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799477] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a01ec0) 00:22:29.711 [2024-07-25 14:49:49.799485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.711 [2024-07-25 14:49:49.799490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a01ec0) 00:22:29.711 [2024-07-25 14:49:49.799501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.711 [2024-07-25 14:49:49.799505] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:29.711 [2024-07-25 14:49:49.799517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:29.711 [2024-07-25 14:49:49.799523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a01ec0) 00:22:29.711 [2024-07-25 14:49:49.799532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.711 [2024-07-25 14:49:49.799546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84e40, cid 0, qid 0 00:22:29.711 [2024-07-25 14:49:49.799551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84fc0, cid 1, qid 0 00:22:29.711 [2024-07-25 14:49:49.799555] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a85140, cid 2, qid 0 00:22:29.711 [2024-07-25 14:49:49.799559] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a852c0, cid 3, qid 0 00:22:29.711 [2024-07-25 14:49:49.799563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a85440, cid 4, qid 0 00:22:29.711 [2024-07-25 14:49:49.799770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.711 [2024-07-25 14:49:49.799780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.711 [2024-07-25 14:49:49.799783] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a85440) on tqpair=0x1a01ec0 00:22:29.711 [2024-07-25 14:49:49.799792] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:29.711 [2024-07-25 14:49:49.799797] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:29.711 [2024-07-25 14:49:49.799809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.799813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a01ec0) 00:22:29.711 [2024-07-25 14:49:49.799819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.711 [2024-07-25 14:49:49.799831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a85440, cid 4, qid 0 00:22:29.711 [2024-07-25 14:49:49.799992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.711 [2024-07-25 14:49:49.800003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.711 [2024-07-25 14:49:49.800006] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.800009] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a01ec0): datao=0, datal=4096, cccid=4 00:22:29.711 [2024-07-25 14:49:49.800013] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a85440) on tqpair(0x1a01ec0): expected_datao=0, payload_size=4096 00:22:29.711 [2024-07-25 14:49:49.800020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.800300] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.800304] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.841262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.711 [2024-07-25 14:49:49.841276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.711 [2024-07-25 14:49:49.841279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.841283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a85440) on tqpair=0x1a01ec0 00:22:29.711 [2024-07-25 14:49:49.841297] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:29.711 [2024-07-25 14:49:49.841321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.841325] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a01ec0) 00:22:29.711 [2024-07-25 14:49:49.841332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.711 [2024-07-25 14:49:49.841338] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.841341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.711 [2024-07-25 14:49:49.841344] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a01ec0) 00:22:29.711 [2024-07-25 14:49:49.841350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.711 [2024-07-25 14:49:49.841366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1a85440, cid 4, qid 0 00:22:29.711 [2024-07-25 14:49:49.841371] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a855c0, cid 5, qid 0 00:22:29.712 [2024-07-25 14:49:49.841580] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.712 [2024-07-25 14:49:49.841590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.712 [2024-07-25 14:49:49.841593] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.841597] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a01ec0): datao=0, datal=1024, cccid=4 00:22:29.712 [2024-07-25 14:49:49.841601] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a85440) on tqpair(0x1a01ec0): expected_datao=0, payload_size=1024 00:22:29.712 [2024-07-25 14:49:49.841605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.841611] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.841614] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.841619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.712 [2024-07-25 14:49:49.841624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.712 [2024-07-25 14:49:49.841627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.841630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a855c0) on tqpair=0x1a01ec0 00:22:29.712 [2024-07-25 14:49:49.885050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.712 [2024-07-25 14:49:49.885059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.712 [2024-07-25 14:49:49.885063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.885067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a85440) on tqpair=0x1a01ec0 00:22:29.712 [2024-07-25 14:49:49.885082] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.885086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a01ec0) 00:22:29.712 [2024-07-25 14:49:49.885093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.712 [2024-07-25 14:49:49.885114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a85440, cid 4, qid 0 00:22:29.712 [2024-07-25 14:49:49.885586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.712 [2024-07-25 14:49:49.885592] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.712 [2024-07-25 14:49:49.885595] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.885598] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a01ec0): datao=0, datal=3072, cccid=4 00:22:29.712 [2024-07-25 14:49:49.885602] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a85440) on tqpair(0x1a01ec0): expected_datao=0, payload_size=3072 00:22:29.712 [2024-07-25 14:49:49.885605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.885611] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.885614] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.885798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.712 [2024-07-25 14:49:49.885807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.712 [2024-07-25 14:49:49.885811] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.885814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a85440) on tqpair=0x1a01ec0 00:22:29.712 [2024-07-25 14:49:49.885824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.885828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a01ec0) 00:22:29.712 [2024-07-25 14:49:49.885834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.712 [2024-07-25 14:49:49.885852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a85440, cid 4, qid 0 00:22:29.712 [2024-07-25 14:49:49.886255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.712 [2024-07-25 14:49:49.886261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.712 [2024-07-25 14:49:49.886264] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.886267] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a01ec0): datao=0, datal=8, cccid=4 00:22:29.712 [2024-07-25 14:49:49.886271] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a85440) on tqpair(0x1a01ec0): expected_datao=0, payload_size=8 00:22:29.712 [2024-07-25 14:49:49.886275] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.886281] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.886284] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.927320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.712 [2024-07-25 14:49:49.927335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.712 [2024-07-25 14:49:49.927338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.712 [2024-07-25 14:49:49.927341] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a85440) on tqpair=0x1a01ec0
00:22:29.712 =====================================================
00:22:29.712 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:29.712 =====================================================
00:22:29.712 Controller Capabilities/Features
00:22:29.712 ================================
00:22:29.712 Vendor ID: 0000
00:22:29.712 Subsystem Vendor ID: 0000
00:22:29.712 Serial Number: ....................
00:22:29.712 Model Number: ........................................
00:22:29.712 Firmware Version: 24.09
00:22:29.712 Recommended Arb Burst: 0
00:22:29.712 IEEE OUI Identifier: 00 00 00
00:22:29.712 Multi-path I/O
00:22:29.712 May have multiple subsystem ports: No
00:22:29.712 May have multiple controllers: No
00:22:29.712 Associated with SR-IOV VF: No
00:22:29.712 Max Data Transfer Size: 131072
00:22:29.712 Max Number of Namespaces: 0
00:22:29.712 Max Number of I/O Queues: 1024
00:22:29.712 NVMe Specification Version (VS): 1.3
00:22:29.712 NVMe Specification Version (Identify): 1.3
00:22:29.712 Maximum Queue Entries: 128
00:22:29.712 Contiguous Queues Required: Yes
00:22:29.712 Arbitration Mechanisms Supported
00:22:29.712 Weighted Round Robin: Not Supported
00:22:29.712 Vendor Specific: Not Supported
00:22:29.712 Reset Timeout: 15000 ms
00:22:29.712 Doorbell Stride: 4 bytes
00:22:29.712 NVM Subsystem Reset: Not Supported
00:22:29.712 Command Sets Supported
00:22:29.712 NVM Command Set: Supported
00:22:29.712 Boot Partition: Not Supported
00:22:29.712 Memory Page Size Minimum: 4096 bytes
00:22:29.712 Memory Page Size Maximum: 4096 bytes
00:22:29.712 Persistent Memory Region: Not Supported
00:22:29.712 Optional Asynchronous Events Supported
00:22:29.712 Namespace Attribute Notices: Not Supported
00:22:29.712 Firmware Activation Notices: Not Supported
00:22:29.712 ANA Change Notices: Not Supported
00:22:29.712 PLE Aggregate Log Change Notices: Not Supported
00:22:29.712 LBA Status Info Alert Notices: Not Supported
00:22:29.712 EGE Aggregate Log Change Notices: Not Supported
00:22:29.712 Normal NVM Subsystem Shutdown event: Not Supported
00:22:29.712 Zone Descriptor Change Notices: Not Supported
00:22:29.712 Discovery Log Change Notices: Supported
00:22:29.712 Controller Attributes
00:22:29.712 128-bit Host Identifier: Not Supported
00:22:29.712 Non-Operational Permissive Mode: Not Supported
00:22:29.712 NVM Sets: Not Supported
00:22:29.712 Read Recovery Levels: Not Supported
00:22:29.712 Endurance Groups: Not Supported
00:22:29.712 Predictable Latency Mode: Not Supported
00:22:29.712 Traffic Based Keep ALive: Not Supported
00:22:29.712 Namespace Granularity: Not Supported
00:22:29.712 SQ Associations: Not Supported
00:22:29.712 UUID List: Not Supported
00:22:29.712 Multi-Domain Subsystem: Not Supported
00:22:29.712 Fixed Capacity Management: Not Supported
00:22:29.712 Variable Capacity Management: Not Supported
00:22:29.712 Delete Endurance Group: Not Supported
00:22:29.712 Delete NVM Set: Not Supported
00:22:29.712 Extended LBA Formats Supported: Not Supported
00:22:29.712 Flexible Data Placement Supported: Not Supported
00:22:29.712
00:22:29.712 Controller Memory Buffer Support
00:22:29.712 ================================
00:22:29.712 Supported: No
00:22:29.712
00:22:29.712 Persistent Memory Region Support
00:22:29.712 ================================
00:22:29.712 Supported: No
00:22:29.712
00:22:29.712 Admin Command Set Attributes
00:22:29.712 ============================
00:22:29.712 Security Send/Receive: Not Supported
00:22:29.712 Format NVM: Not Supported
00:22:29.712 Firmware Activate/Download: Not Supported
00:22:29.712 Namespace Management: Not Supported
00:22:29.712 Device Self-Test: Not Supported
00:22:29.712 Directives: Not Supported
00:22:29.712 NVMe-MI: Not Supported
00:22:29.712 Virtualization Management: Not Supported
00:22:29.712 Doorbell Buffer Config: Not Supported
00:22:29.712 Get LBA Status Capability: Not Supported
00:22:29.712 Command & Feature Lockdown Capability: Not Supported
00:22:29.712 Abort Command Limit: 1
00:22:29.712 Async Event Request Limit: 4
00:22:29.712 Number of Firmware Slots: N/A
00:22:29.712 Firmware Slot 1 Read-Only: N/A
00:22:29.712 Firmware Activation Without Reset: N/A
00:22:29.712 Multiple Update Detection Support: N/A
00:22:29.712 Firmware Update Granularity: No Information Provided
00:22:29.712 Per-Namespace SMART Log: No
00:22:29.712 Asymmetric Namespace Access Log Page: Not Supported
00:22:29.712 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:29.712 Command Effects Log Page: Not Supported
00:22:29.713 Get Log Page Extended Data: Supported
00:22:29.713 Telemetry Log Pages: Not Supported
00:22:29.713 Persistent Event Log Pages: Not Supported
00:22:29.713 Supported Log Pages Log Page: May Support
00:22:29.713 Commands Supported & Effects Log Page: Not Supported
00:22:29.713 Feature Identifiers & Effects Log Page:May Support
00:22:29.713 NVMe-MI Commands & Effects Log Page: May Support
00:22:29.713 Data Area 4 for Telemetry Log: Not Supported
00:22:29.713 Error Log Page Entries Supported: 128
00:22:29.713 Keep Alive: Not Supported
00:22:29.713
00:22:29.713 NVM Command Set Attributes
00:22:29.713 ==========================
00:22:29.713 Submission Queue Entry Size
00:22:29.713 Max: 1
00:22:29.713 Min: 1
00:22:29.713 Completion Queue Entry Size
00:22:29.713 Max: 1
00:22:29.713 Min: 1
00:22:29.713 Number of Namespaces: 0
00:22:29.713 Compare Command: Not Supported
00:22:29.713 Write Uncorrectable Command: Not Supported
00:22:29.713 Dataset Management Command: Not Supported
00:22:29.713 Write Zeroes Command: Not Supported
00:22:29.713 Set Features Save Field: Not Supported
00:22:29.713 Reservations: Not Supported
00:22:29.713 Timestamp: Not Supported
00:22:29.713 Copy: Not Supported
00:22:29.713 Volatile Write Cache: Not Present
00:22:29.713 Atomic Write Unit (Normal): 1
00:22:29.713 Atomic Write Unit (PFail): 1
00:22:29.713 Atomic Compare & Write Unit: 1
00:22:29.713 Fused Compare & Write: Supported
00:22:29.713 Scatter-Gather List
00:22:29.713 SGL Command Set: Supported
00:22:29.713 SGL Keyed: Supported
00:22:29.713 SGL Bit Bucket Descriptor: Not Supported
00:22:29.713 SGL Metadata Pointer: Not Supported
00:22:29.713 Oversized SGL: Not Supported
00:22:29.713 SGL Metadata Address: Not Supported
00:22:29.713 SGL Offset: Supported
00:22:29.713 Transport SGL Data Block: Not Supported
00:22:29.713 Replay Protected Memory Block: Not Supported
00:22:29.713
00:22:29.713 Firmware Slot Information
00:22:29.713 =========================
00:22:29.713 Active slot: 0
00:22:29.713
00:22:29.713
00:22:29.713 Error Log
00:22:29.713 =========
00:22:29.713
00:22:29.713 Active Namespaces
00:22:29.713 =================
00:22:29.713 Discovery Log Page
00:22:29.713 ==================
00:22:29.713 Generation Counter: 2
00:22:29.713 Number of Records: 2
00:22:29.713 Record Format: 0
00:22:29.713
00:22:29.713 Discovery Log Entry 0
00:22:29.713 ----------------------
00:22:29.713 Transport Type: 3 (TCP)
00:22:29.713 Address Family: 1 (IPv4)
00:22:29.713 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:29.713 Entry Flags:
00:22:29.713 Duplicate Returned Information: 1
00:22:29.713 Explicit Persistent Connection Support for Discovery: 1
00:22:29.713 Transport Requirements:
00:22:29.713 Secure Channel: Not Required
00:22:29.713 Port ID: 0 (0x0000)
00:22:29.713 Controller ID: 65535 (0xffff)
00:22:29.713 Admin Max SQ Size: 128
00:22:29.713 Transport Service Identifier: 4420
00:22:29.713 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:29.713 Transport Address: 10.0.0.2
00:22:29.713 Discovery Log Entry 1
00:22:29.713 ----------------------
00:22:29.713 Transport Type: 3 (TCP)
00:22:29.713 Address Family: 1 (IPv4)
00:22:29.713 Subsystem Type: 2 (NVM Subsystem)
00:22:29.713 Entry Flags:
00:22:29.713 Duplicate Returned Information: 0
00:22:29.713 Explicit Persistent Connection Support for Discovery: 0
00:22:29.713 Transport Requirements:
00:22:29.713 Secure Channel: Not Required
00:22:29.713 Port ID: 0 (0x0000)
00:22:29.713 Controller ID: 65535 (0xffff)
00:22:29.713 Admin Max SQ Size: 128
00:22:29.713 Transport Service Identifier: 4420
00:22:29.713 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:29.713 Transport Address: 10.0.0.2 [2024-07-25 14:49:49.927424] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:29.713 [2024-07-25 14:49:49.927436] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84e40) on tqpair=0x1a01ec0 00:22:29.713 [2024-07-25 14:49:49.927441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.713 [2024-07-25 14:49:49.927446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84fc0) on tqpair=0x1a01ec0 00:22:29.713 [2024-07-25 14:49:49.927450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.713 [2024-07-25 14:49:49.927454] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a85140) on tqpair=0x1a01ec0 00:22:29.713 [2024-07-25 14:49:49.927460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.713 [2024-07-25 14:49:49.927464] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a852c0) on tqpair=0x1a01ec0 00:22:29.713 [2024-07-25 14:49:49.927468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.713 [2024-07-25 14:49:49.927478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.713 [2024-07-25 14:49:49.927481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.713 [2024-07-25 14:49:49.927484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a01ec0) 00:22:29.713 [2024-07-25 14:49:49.927491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.713 [2024-07-25 14:49:49.927506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a852c0, cid 3, qid 0 00:22:29.713 [2024-07-25 14:49:49.927665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.713 [2024-07-25 14:49:49.927675] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.713 [2024-07-25 14:49:49.927679] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.713 [2024-07-25 14:49:49.927682] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a852c0) on tqpair=0x1a01ec0 00:22:29.713 [2024-07-25 14:49:49.927690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.713 [2024-07-25 14:49:49.927694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.713 [2024-07-25 14:49:49.927697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a01ec0) 00:22:29.713 [2024-07-25
14:49:49.927704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.713 [2024-07-25 14:49:49.927721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a852c0, cid 3, qid 0 00:22:29.713 [2024-07-25 14:49:49.927884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.713 [2024-07-25 14:49:49.927893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.713 [2024-07-25 14:49:49.927896] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.713 [2024-07-25 14:49:49.927900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a852c0) on tqpair=0x1a01ec0 00:22:29.713 [2024-07-25 14:49:49.927905] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:29.713 [2024-07-25 14:49:49.927909] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:29.713 [2024-07-25 14:49:49.927920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.713 [2024-07-25 14:49:49.927924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.713 [2024-07-25 14:49:49.927927] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a01ec0) 00:22:29.713 [2024-07-25 14:49:49.927934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.713 [2024-07-25 14:49:49.927946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a852c0, cid 3, qid 0 00:22:29.713 [2024-07-25 14:49:49.928138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.713 [2024-07-25 14:49:49.928149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.713 [2024-07-25 14:49:49.928152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.713 [2024-07-25 14:49:49.928155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a852c0) on tqpair=0x1a01ec0 00:22:29.713 [2024-07-25 14:49:49.928167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.713 [2024-07-25 14:49:49.928171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.713 [2024-07-25 14:49:49.928174] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a01ec0) 00:22:29.713 [2024-07-25 14:49:49.928180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.713 [2024-07-25 14:49:49.928197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a852c0, cid 3, qid 0 00:22:29.713 [2024-07-25 14:49:49.928386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.713 [2024-07-25 14:49:49.928395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.713 [2024-07-25 14:49:49.928398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.713 [2024-07-25 14:49:49.928402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a852c0) on tqpair=0x1a01ec0 00:22:29.713 [2024-07-25 14:49:49.928412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.713 [2024-07-25 14:49:49.928416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.713 [2024-07-25 14:49:49.928419] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a01ec0) 00:22:29.713 [2024-07-25 14:49:49.928426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.713 [2024-07-25 14:49:49.928438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a852c0, cid 3, qid 0 00:22:29.713 [2024-07-25 14:49:49.928635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.713 [2024-07-25 14:49:49.928644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.713 [2024-07-25 14:49:49.928647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.714 [2024-07-25 14:49:49.928651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a852c0) on tqpair=0x1a01ec0 00:22:29.714 [2024-07-25 14:49:49.928662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.714 [2024-07-25 14:49:49.928666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.714 [2024-07-25 14:49:49.928669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a01ec0) 00:22:29.714 [2024-07-25 14:49:49.928675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.714 [2024-07-25 14:49:49.928687] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a852c0, cid 3, qid 0 00:22:29.714 [2024-07-25 14:49:49.928839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.714 [2024-07-25 14:49:49.928848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.714 [2024-07-25 14:49:49.928851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.714 [2024-07-25 14:49:49.928855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a852c0) on tqpair=0x1a01ec0 00:22:29.714 [2024-07-25 14:49:49.928866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.714 [2024-07-25 14:49:49.928870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.714 [2024-07-25 14:49:49.928873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a01ec0) 00:22:29.714 [2024-07-25 14:49:49.928879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.714 [2024-07-25 14:49:49.928891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a852c0, cid 3, qid 0 00:22:29.714 [2024-07-25 14:49:49.933049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.714 [2024-07-25 14:49:49.933062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.714 [2024-07-25 14:49:49.933066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.714 [2024-07-25 14:49:49.933069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a852c0) on tqpair=0x1a01ec0 00:22:29.714 [2024-07-25 14:49:49.933080] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.714 [2024-07-25 14:49:49.933084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.714 [2024-07-25 14:49:49.933087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a01ec0) 00:22:29.714 [2024-07-25 14:49:49.933094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.714 [2024-07-25 14:49:49.933110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a852c0, cid 3, qid 0 00:22:29.714 [2024-07-25 14:49:49.933379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.714 [2024-07-25 14:49:49.933389] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.714 [2024-07-25 14:49:49.933392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.714 [2024-07-25 14:49:49.933396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a852c0) on tqpair=0x1a01ec0 00:22:29.714 [2024-07-25 14:49:49.933405] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds
00:22:29.714
00:22:29.714 14:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:22:29.714 [2024-07-25 14:49:49.971911] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization...
00:22:29.714 [2024-07-25 14:49:49.971957] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2407769 ]
00:22:29.714 EAL: No free 2048 kB hugepages reported on node 1
00:22:29.978 [2024-07-25 14:49:50.000277] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:29.978 [2024-07-25 14:49:50.000322] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:29.978 [2024-07-25 14:49:50.000327] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:29.978 [2024-07-25 14:49:50.000339] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:29.978 [2024-07-25 14:49:50.000344] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:29.978 [2024-07-25 14:49:50.000876] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:29.978 [2024-07-25 14:49:50.000901] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x232cec0 0 00:22:29.978 [2024-07-25 14:49:50.015058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:29.978 [2024-07-25 14:49:50.015079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:29.978 [2024-07-25 14:49:50.015083] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:29.978 [2024-07-25 14:49:50.015087] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:29.978 [2024-07-25 14:49:50.015117] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.978 [2024-07-25 14:49:50.015122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.978 [2024-07-25 14:49:50.015126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232cec0) 00:22:29.978 [2024-07-25 14:49:50.015136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:29.978 [2024-07-25 14:49:50.015152]
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23afe40, cid 0, qid 0 00:22:29.978 [2024-07-25 14:49:50.023053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.978 [2024-07-25 14:49:50.023061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.978 [2024-07-25 14:49:50.023065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.978 [2024-07-25 14:49:50.023069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23afe40) on tqpair=0x232cec0 00:22:29.978 [2024-07-25 14:49:50.023077] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:29.979 [2024-07-25 14:49:50.023084] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:29.979 [2024-07-25 14:49:50.023091] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:29.979 [2024-07-25 14:49:50.023102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.023106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.023109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232cec0) 00:22:29.979 [2024-07-25 14:49:50.023116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.979 [2024-07-25 14:49:50.023128] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23afe40, cid 0, qid 0 00:22:29.979 [2024-07-25 14:49:50.023393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.979 [2024-07-25 14:49:50.023407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.979 [2024-07-25 14:49:50.023410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.023414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23afe40) on tqpair=0x232cec0 00:22:29.979 [2024-07-25 14:49:50.023420] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:29.979 [2024-07-25 14:49:50.023429] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:29.979 [2024-07-25 14:49:50.023437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.023441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.023445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232cec0) 00:22:29.979 [2024-07-25 14:49:50.023453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.979 [2024-07-25 14:49:50.023468] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23afe40, cid 0, qid 0 00:22:29.979 [2024-07-25 14:49:50.023644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.979 [2024-07-25 14:49:50.023658] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.979 [2024-07-25 14:49:50.023661] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.023665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23afe40) on 
tqpair=0x232cec0 00:22:29.979 [2024-07-25 14:49:50.023671] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:29.979 [2024-07-25 14:49:50.023680] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:29.979 [2024-07-25 14:49:50.023688] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.023692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.023695] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232cec0) 00:22:29.979 [2024-07-25 14:49:50.023702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.979 [2024-07-25 14:49:50.023716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23afe40, cid 0, qid 0 00:22:29.979 [2024-07-25 14:49:50.023870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.979 [2024-07-25 14:49:50.023884] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.979 [2024-07-25 14:49:50.023887] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.023890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23afe40) on tqpair=0x232cec0 00:22:29.979 [2024-07-25 14:49:50.023896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:29.979 [2024-07-25 14:49:50.023908] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.023915] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.023918] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232cec0) 00:22:29.979 [2024-07-25 14:49:50.023926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.979 [2024-07-25 14:49:50.023939] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23afe40, cid 0, qid 0 00:22:29.979 [2024-07-25 14:49:50.024126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.979 [2024-07-25 14:49:50.024140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.979 [2024-07-25 14:49:50.024143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.024147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23afe40) on tqpair=0x232cec0 00:22:29.979 [2024-07-25 14:49:50.024152] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:29.979 [2024-07-25 14:49:50.024157] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:29.979 [2024-07-25 14:49:50.024166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:29.979 [2024-07-25 14:49:50.024271] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:29.979 [2024-07-25 14:49:50.024275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:29.979 [2024-07-25 14:49:50.024284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.024288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.024291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232cec0) 00:22:29.979 [2024-07-25 14:49:50.024299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.979 [2024-07-25 14:49:50.024312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23afe40, cid 0, qid 0 00:22:29.979 [2024-07-25 14:49:50.024470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.979 [2024-07-25 14:49:50.024482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.979 [2024-07-25 14:49:50.024485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.024489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23afe40) on tqpair=0x232cec0 00:22:29.979 [2024-07-25 14:49:50.024494] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:29.979 [2024-07-25 14:49:50.024506] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.024510] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.024513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232cec0) 00:22:29.979 [2024-07-25 14:49:50.024521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.979 [2024-07-25 14:49:50.024533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23afe40, cid 0, qid 0 00:22:29.979 [2024-07-25 14:49:50.024717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.979 [2024-07-25 14:49:50.024730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.979 [2024-07-25 14:49:50.024733] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.024737] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23afe40) on tqpair=0x232cec0 00:22:29.979 [2024-07-25 14:49:50.024742] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:29.979 [2024-07-25 14:49:50.024747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:29.979 [2024-07-25 14:49:50.024760] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:29.979 [2024-07-25 14:49:50.024768] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:29.979 [2024-07-25 14:49:50.024777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.024781] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232cec0) 00:22:29.979 [2024-07-25 14:49:50.024788] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.979 [2024-07-25 14:49:50.024801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23afe40, cid 0, qid 0 00:22:29.979 [2024-07-25 14:49:50.024992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.979 [2024-07-25 14:49:50.025005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.979 [2024-07-25 14:49:50.025008] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.025012] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232cec0): datao=0, datal=4096, cccid=0 00:22:29.979 [2024-07-25 14:49:50.025016] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23afe40) on tqpair(0x232cec0): expected_datao=0, payload_size=4096 00:22:29.979 [2024-07-25 14:49:50.025021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.025302] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.025306] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.025464] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.979 [2024-07-25 14:49:50.025477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.979 [2024-07-25 14:49:50.025480] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.025483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23afe40) on tqpair=0x232cec0 00:22:29.979 [2024-07-25 14:49:50.025493] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:29.979 [2024-07-25 14:49:50.025501] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:29.979 [2024-07-25 14:49:50.025505] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:29.979 [2024-07-25 14:49:50.025508] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:29.979 [2024-07-25 14:49:50.025512] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:29.979 [2024-07-25 14:49:50.025517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:29.979 [2024-07-25 14:49:50.025528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:29.979 [2024-07-25 14:49:50.025535] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.979 [2024-07-25 14:49:50.025539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.025542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232cec0) 00:22:29.980 [2024-07-25 14:49:50.025550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:29.980 [2024-07-25 14:49:50.025564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23afe40, cid 0, qid 0 00:22:29.980 [2024-07-25 14:49:50.025718] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.980 [2024-07-25 14:49:50.025730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.980 [2024-07-25 14:49:50.025733] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.025741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23afe40) on tqpair=0x232cec0 00:22:29.980 [2024-07-25 14:49:50.025749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.025753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.025756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232cec0) 00:22:29.980 [2024-07-25 14:49:50.025763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.980 [2024-07-25 14:49:50.025768] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.025771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.025774] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x232cec0) 00:22:29.980 [2024-07-25 14:49:50.025779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.980 [2024-07-25 14:49:50.025784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.025787] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.025790] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x232cec0) 00:22:29.980 [2024-07-25 14:49:50.025795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.980 [2024-07-25 14:49:50.025800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.025803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.025806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.980 [2024-07-25 14:49:50.025811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.980 [2024-07-25 14:49:50.025816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:29.980 [2024-07-25 14:49:50.025828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:29.980 [2024-07-25 14:49:50.025835] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.025838] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232cec0) 00:22:29.980 [2024-07-25 14:49:50.025844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.980 [2024-07-25 14:49:50.025858] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23afe40, cid 0, qid 0 00:22:29.980 [2024-07-25 14:49:50.025862] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x23affc0, cid 1, qid 0 00:22:29.980 [2024-07-25 14:49:50.025866] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b0140, cid 2, qid 0 00:22:29.980 [2024-07-25 14:49:50.025871] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.980 [2024-07-25 14:49:50.025874] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b0440, cid 4, qid 0 00:22:29.980 [2024-07-25 14:49:50.026091] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.980 [2024-07-25 14:49:50.026104] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.980 [2024-07-25 14:49:50.026107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.026110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b0440) on tqpair=0x232cec0 00:22:29.980 [2024-07-25 14:49:50.026116] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:29.980 [2024-07-25 14:49:50.026122] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:29.980 [2024-07-25 14:49:50.026133] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:29.980 [2024-07-25 14:49:50.026139] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:29.980 [2024-07-25 14:49:50.026147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.026150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.026153] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232cec0) 00:22:29.980 [2024-07-25 14:49:50.026160] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:29.980 [2024-07-25 14:49:50.026173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b0440, cid 4, qid 0 00:22:29.980 [2024-07-25 14:49:50.026340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.980 [2024-07-25 14:49:50.026352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.980 [2024-07-25 14:49:50.026355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.026359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b0440) on tqpair=0x232cec0 00:22:29.980 [2024-07-25 14:49:50.026414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:29.980 [2024-07-25 14:49:50.026426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:29.980 [2024-07-25 14:49:50.026434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.026438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232cec0) 00:22:29.980 [2024-07-25 14:49:50.026444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:29.980 [2024-07-25 14:49:50.026457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b0440, cid 4, qid 0 00:22:29.980 [2024-07-25 14:49:50.026622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.980 [2024-07-25 14:49:50.026634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.980 [2024-07-25 14:49:50.026638] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.026641] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232cec0): datao=0, datal=4096, cccid=4 00:22:29.980 [2024-07-25 14:49:50.026646] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b0440) on tqpair(0x232cec0): expected_datao=0, payload_size=4096 00:22:29.980 [2024-07-25 14:49:50.026650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.026915] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.026919] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.070056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.980 [2024-07-25 14:49:50.070075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.980 [2024-07-25 14:49:50.070078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.070082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b0440) on tqpair=0x232cec0 00:22:29.980 [2024-07-25 14:49:50.070095] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:29.980 [2024-07-25 14:49:50.070111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:29.980 [2024-07-25 14:49:50.070121] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:29.980 [2024-07-25 14:49:50.070129] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.070134] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232cec0) 00:22:29.980 [2024-07-25 14:49:50.070142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.980 [2024-07-25 14:49:50.070156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b0440, cid 4, qid 0 00:22:29.980 [2024-07-25 14:49:50.070431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.980 [2024-07-25 14:49:50.070445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.980 [2024-07-25 14:49:50.070448] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.070451] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232cec0): datao=0, datal=4096, cccid=4 00:22:29.980 [2024-07-25 14:49:50.070456] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b0440) on tqpair(0x232cec0): expected_datao=0, payload_size=4096 00:22:29.980 [2024-07-25 14:49:50.070460] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.070729] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.070733] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.111284] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.980 [2024-07-25 14:49:50.111302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.980 [2024-07-25 14:49:50.111306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.111309] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b0440) on tqpair=0x232cec0 00:22:29.980 [2024-07-25 14:49:50.111326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:29.980 [2024-07-25 14:49:50.111338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:29.980 [2024-07-25 14:49:50.111347] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.980 [2024-07-25 14:49:50.111352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232cec0) 00:22:29.980 [2024-07-25 14:49:50.111361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.980 [2024-07-25 14:49:50.111377] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b0440, cid 4, qid 0 00:22:29.980 [2024-07-25 14:49:50.111763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.980 [2024-07-25 14:49:50.111768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.980 [2024-07-25 14:49:50.111771] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.111774] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232cec0): datao=0, datal=4096, cccid=4 00:22:29.981 [2024-07-25 14:49:50.111778] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b0440) on tqpair(0x232cec0): expected_datao=0, payload_size=4096 00:22:29.981 [2024-07-25 14:49:50.111782] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.112058] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.112062] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.153287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.981 [2024-07-25 14:49:50.153304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.981 [2024-07-25 14:49:50.153307] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.153311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b0440) on tqpair=0x232cec0 00:22:29.981 [2024-07-25 14:49:50.153321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:29.981 [2024-07-25 14:49:50.153331] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:29.981 [2024-07-25 14:49:50.153346] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:29.981 [2024-07-25 14:49:50.153352] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:29.981 [2024-07-25 14:49:50.153356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:29.981 [2024-07-25 14:49:50.153361] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:29.981 [2024-07-25 14:49:50.153365] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:29.981 [2024-07-25 14:49:50.153369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:29.981 [2024-07-25 14:49:50.153374] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:29.981 [2024-07-25 14:49:50.153390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.153395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232cec0) 00:22:29.981 [2024-07-25 14:49:50.153402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.981 [2024-07-25 14:49:50.153408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.153411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.153415] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x232cec0) 00:22:29.981 [2024-07-25 14:49:50.153421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.981 [2024-07-25 14:49:50.153437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b0440, cid 4, qid 0 00:22:29.981 [2024-07-25 14:49:50.153442] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b05c0, cid 5, qid 0 00:22:29.981 [2024-07-25 14:49:50.153619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.981 [2024-07-25 14:49:50.153633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.981 [2024-07-25 14:49:50.153636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.153641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b0440) on tqpair=0x232cec0 00:22:29.981 [2024-07-25 14:49:50.153648] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.981 [2024-07-25 14:49:50.153653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.981 [2024-07-25 14:49:50.153658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.153662] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b05c0) on tqpair=0x232cec0 00:22:29.981 [2024-07-25 14:49:50.153673] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.153677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x232cec0) 00:22:29.981 [2024-07-25 14:49:50.153684] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.981 
[2024-07-25 14:49:50.153699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b05c0, cid 5, qid 0 00:22:29.981 [2024-07-25 14:49:50.153853] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.981 [2024-07-25 14:49:50.153864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.981 [2024-07-25 14:49:50.153867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.153871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b05c0) on tqpair=0x232cec0 00:22:29.981 [2024-07-25 14:49:50.153886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.153890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x232cec0) 00:22:29.981 [2024-07-25 14:49:50.153896] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.981 [2024-07-25 14:49:50.153909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b05c0, cid 5, qid 0 00:22:29.981 [2024-07-25 14:49:50.154076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.981 [2024-07-25 14:49:50.154088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.981 [2024-07-25 14:49:50.154091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.154094] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b05c0) on tqpair=0x232cec0 00:22:29.981 [2024-07-25 14:49:50.154106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.154109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x232cec0) 00:22:29.981 [2024-07-25 14:49:50.154116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.981 [2024-07-25 14:49:50.154129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b05c0, cid 5, qid 0 00:22:29.981 [2024-07-25 14:49:50.154287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.981 [2024-07-25 14:49:50.154297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.981 [2024-07-25 14:49:50.154300] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.154304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b05c0) on tqpair=0x232cec0 00:22:29.981 [2024-07-25 14:49:50.154321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.154326] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x232cec0) 00:22:29.981 [2024-07-25 14:49:50.154332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.981 [2024-07-25 14:49:50.154338] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.154342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232cec0) 00:22:29.981 [2024-07-25 14:49:50.154347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:29.981 [2024-07-25 14:49:50.154354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.154357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x232cec0) 00:22:29.981 [2024-07-25 14:49:50.154362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.981 [2024-07-25 14:49:50.154368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.154372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x232cec0) 00:22:29.981 [2024-07-25 14:49:50.154377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.981 [2024-07-25 14:49:50.154390] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b05c0, cid 5, qid 0 00:22:29.981 [2024-07-25 14:49:50.154395] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b0440, cid 4, qid 0 00:22:29.981 [2024-07-25 14:49:50.154399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b0740, cid 6, qid 0 00:22:29.981 [2024-07-25 14:49:50.154403] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b08c0, cid 7, qid 0 00:22:29.981 [2024-07-25 14:49:50.154652] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.981 [2024-07-25 14:49:50.154668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.981 [2024-07-25 14:49:50.154672] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.154675] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232cec0): datao=0, datal=8192, cccid=5 00:22:29.981 [2024-07-25 14:49:50.154679] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b05c0) on tqpair(0x232cec0): expected_datao=0, payload_size=8192 00:22:29.981 [2024-07-25 14:49:50.154683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.155289] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.155294] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.155298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.981 [2024-07-25 14:49:50.155303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.981 [2024-07-25 14:49:50.155306] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.155309] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232cec0): datao=0, datal=512, cccid=4 00:22:29.981 [2024-07-25 14:49:50.155313] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b0440) on tqpair(0x232cec0): expected_datao=0, payload_size=512 00:22:29.981 [2024-07-25 14:49:50.155316] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.155322] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.155325] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.155329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.981 [2024-07-25 14:49:50.155334] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.981 [2024-07-25 14:49:50.155337] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.981 [2024-07-25 14:49:50.155340] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232cec0): datao=0, datal=512, cccid=6 00:22:29.981 [2024-07-25 14:49:50.155344] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b0740) on tqpair(0x232cec0): expected_datao=0, payload_size=512 00:22:29.981 [2024-07-25 14:49:50.155347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.982 [2024-07-25 14:49:50.155353] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.982 [2024-07-25 14:49:50.155356] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.982 [2024-07-25 14:49:50.155360] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.982 [2024-07-25 14:49:50.155365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.982 [2024-07-25 14:49:50.155368] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.982 [2024-07-25 14:49:50.155371] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232cec0): datao=0, datal=4096, cccid=7 00:22:29.982 [2024-07-25 14:49:50.155375] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b08c0) on tqpair(0x232cec0): expected_datao=0, payload_size=4096 00:22:29.982 [2024-07-25 14:49:50.155379] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.982 [2024-07-25 14:49:50.155385] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.982 [2024-07-25 14:49:50.155388] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.982 [2024-07-25 14:49:50.155627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.982 [2024-07-25 14:49:50.155635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.982 [2024-07-25 14:49:50.155639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.982 [2024-07-25 14:49:50.155644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b05c0) on tqpair=0x232cec0 00:22:29.982 [2024-07-25 14:49:50.155659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.982 [2024-07-25 14:49:50.155665] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.982 [2024-07-25 14:49:50.155668] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.982 [2024-07-25 14:49:50.155673] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b0440) on tqpair=0x232cec0 00:22:29.982 [2024-07-25 14:49:50.155682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.982 [2024-07-25 14:49:50.155687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.982 [2024-07-25 14:49:50.155690] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.982 [2024-07-25 14:49:50.155693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b0740) on tqpair=0x232cec0 00:22:29.982 [2024-07-25 14:49:50.155699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.982 [2024-07-25 14:49:50.155704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.982 [2024-07-25 14:49:50.155707] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.982 [2024-07-25 14:49:50.155710] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b08c0) on tqpair=0x232cec0 00:22:29.982 ===================================================== 00:22:29.982 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.982 ===================================================== 00:22:29.982 Controller Capabilities/Features 00:22:29.982 ================================ 00:22:29.982 Vendor ID: 8086 00:22:29.982 Subsystem Vendor ID: 8086 00:22:29.982 Serial Number: SPDK00000000000001 00:22:29.982 Model Number: SPDK bdev Controller 00:22:29.982 Firmware Version: 24.09 00:22:29.982 Recommended Arb Burst: 6 00:22:29.982 IEEE OUI Identifier: e4 d2 5c 00:22:29.982 Multi-path I/O 00:22:29.982 May have multiple subsystem ports: Yes 00:22:29.982 May have multiple controllers: Yes 00:22:29.982 Associated with SR-IOV VF: No 00:22:29.982 Max Data Transfer Size: 131072 00:22:29.982 Max Number of Namespaces: 32 00:22:29.982 Max Number of I/O Queues: 127 00:22:29.982 NVMe Specification Version (VS): 1.3 00:22:29.982 NVMe Specification Version (Identify): 1.3 00:22:29.982 Maximum Queue Entries: 128 00:22:29.982 Contiguous Queues Required: Yes 00:22:29.982 Arbitration Mechanisms Supported 00:22:29.982 Weighted Round Robin: Not Supported 00:22:29.982 Vendor Specific: Not Supported 00:22:29.982 Reset Timeout: 15000 ms 00:22:29.982 Doorbell Stride: 4 bytes 00:22:29.982 NVM Subsystem Reset: Not Supported 00:22:29.982 Command Sets Supported 00:22:29.982 NVM Command Set: Supported 00:22:29.982 Boot Partition: Not Supported 00:22:29.982 Memory Page Size Minimum: 4096 bytes 00:22:29.982 Memory Page Size Maximum: 4096 bytes 00:22:29.982 Persistent Memory Region: Not Supported 00:22:29.982 Optional Asynchronous Events Supported 00:22:29.982 Namespace Attribute Notices: Supported 00:22:29.982 Firmware Activation Notices: Not Supported 00:22:29.982 ANA Change Notices: Not Supported 00:22:29.982 PLE Aggregate Log Change Notices: Not Supported 00:22:29.982 LBA Status Info Alert Notices: Not Supported 00:22:29.982 EGE Aggregate Log Change Notices: Not Supported 00:22:29.982 Normal NVM Subsystem Shutdown event: Not Supported 00:22:29.982 Zone Descriptor Change Notices: Not Supported 00:22:29.982 Discovery Log Change Notices: Not Supported 00:22:29.982 Controller Attributes 00:22:29.982 128-bit Host Identifier: Supported 00:22:29.982 Non-Operational Permissive Mode: Not Supported 00:22:29.982 NVM Sets: Not Supported 00:22:29.982 Read Recovery Levels: Not Supported 00:22:29.982 Endurance Groups: Not Supported 00:22:29.982 Predictable Latency Mode: Not Supported 00:22:29.982 Traffic Based Keep ALive: Not Supported 00:22:29.982 Namespace Granularity: Not Supported 00:22:29.982 SQ Associations: Not Supported 00:22:29.982 UUID List: Not Supported 00:22:29.982 Multi-Domain Subsystem: Not Supported 00:22:29.982 Fixed Capacity Management: Not Supported 00:22:29.982 Variable Capacity Management: Not Supported 00:22:29.982 Delete Endurance Group: Not Supported 00:22:29.982 Delete NVM Set: Not Supported 00:22:29.982 Extended LBA Formats Supported: Not Supported 00:22:29.982 Flexible Data Placement Supported: Not Supported 00:22:29.982 00:22:29.982 Controller Memory Buffer Support 00:22:29.982 ================================ 00:22:29.982 Supported: No 00:22:29.982 00:22:29.982 Persistent Memory Region Support 00:22:29.982 ================================ 00:22:29.982 Supported: No 00:22:29.982 00:22:29.982 Admin Command Set Attributes 00:22:29.982 ============================ 00:22:29.982 Security 
Send/Receive: Not Supported 00:22:29.982 Format NVM: Not Supported 00:22:29.982 Firmware Activate/Download: Not Supported 00:22:29.982 Namespace Management: Not Supported 00:22:29.982 Device Self-Test: Not Supported 00:22:29.982 Directives: Not Supported 00:22:29.982 NVMe-MI: Not Supported 00:22:29.982 Virtualization Management: Not Supported 00:22:29.982 Doorbell Buffer Config: Not Supported 00:22:29.982 Get LBA Status Capability: Not Supported 00:22:29.982 Command & Feature Lockdown Capability: Not Supported 00:22:29.982 Abort Command Limit: 4 00:22:29.982 Async Event Request Limit: 4 00:22:29.982 Number of Firmware Slots: N/A 00:22:29.982 Firmware Slot 1 Read-Only: N/A 00:22:29.982 Firmware Activation Without Reset: N/A 00:22:29.982 Multiple Update Detection Support: N/A 00:22:29.982 Firmware Update Granularity: No Information Provided 00:22:29.982 Per-Namespace SMART Log: No 00:22:29.982 Asymmetric Namespace Access Log Page: Not Supported 00:22:29.982 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:29.982 Command Effects Log Page: Supported 00:22:29.982 Get Log Page Extended Data: Supported 00:22:29.982 Telemetry Log Pages: Not Supported 00:22:29.982 Persistent Event Log Pages: Not Supported 00:22:29.982 Supported Log Pages Log Page: May Support 00:22:29.982 Commands Supported & Effects Log Page: Not Supported 00:22:29.982 Feature Identifiers & Effects Log Page:May Support 00:22:29.982 NVMe-MI Commands & Effects Log Page: May Support 00:22:29.982 Data Area 4 for Telemetry Log: Not Supported 00:22:29.982 Error Log Page Entries Supported: 128 00:22:29.982 Keep Alive: Supported 00:22:29.982 Keep Alive Granularity: 10000 ms 00:22:29.982 00:22:29.982 NVM Command Set Attributes 00:22:29.982 ========================== 00:22:29.982 Submission Queue Entry Size 00:22:29.982 Max: 64 00:22:29.982 Min: 64 00:22:29.982 Completion Queue Entry Size 00:22:29.982 Max: 16 00:22:29.982 Min: 16 00:22:29.982 Number of Namespaces: 32 00:22:29.982 Compare Command: Supported 00:22:29.982 Write Uncorrectable Command: Not Supported 00:22:29.982 Dataset Management Command: Supported 00:22:29.982 Write Zeroes Command: Supported 00:22:29.982 Set Features Save Field: Not Supported 00:22:29.982 Reservations: Supported 00:22:29.982 Timestamp: Not Supported 00:22:29.982 Copy: Supported 00:22:29.982 Volatile Write Cache: Present 00:22:29.982 Atomic Write Unit (Normal): 1 00:22:29.982 Atomic Write Unit (PFail): 1 00:22:29.982 Atomic Compare & Write Unit: 1 00:22:29.982 Fused Compare & Write: Supported 00:22:29.982 Scatter-Gather List 00:22:29.982 SGL Command Set: Supported 00:22:29.982 SGL Keyed: Supported 00:22:29.982 SGL Bit Bucket Descriptor: Not Supported 00:22:29.982 SGL Metadata Pointer: Not Supported 00:22:29.982 Oversized SGL: Not Supported 00:22:29.982 SGL Metadata Address: Not Supported 00:22:29.982 SGL Offset: Supported 00:22:29.982 Transport SGL Data Block: Not Supported 00:22:29.982 Replay Protected Memory Block: Not Supported 00:22:29.982 00:22:29.982 Firmware Slot Information 00:22:29.982 ========================= 00:22:29.983 Active slot: 1 00:22:29.983 Slot 1 Firmware Revision: 24.09 00:22:29.983 00:22:29.983 00:22:29.983 Commands Supported and Effects 00:22:29.983 ============================== 00:22:29.983 Admin Commands 00:22:29.983 -------------- 00:22:29.983 Get Log Page (02h): Supported 00:22:29.983 Identify (06h): Supported 00:22:29.983 Abort (08h): Supported 00:22:29.983 Set Features (09h): Supported 00:22:29.983 Get Features (0Ah): Supported 00:22:29.983 Asynchronous Event Request (0Ch): 
Supported 00:22:29.983 Keep Alive (18h): Supported 00:22:29.983 I/O Commands 00:22:29.983 ------------ 00:22:29.983 Flush (00h): Supported LBA-Change 00:22:29.983 Write (01h): Supported LBA-Change 00:22:29.983 Read (02h): Supported 00:22:29.983 Compare (05h): Supported 00:22:29.983 Write Zeroes (08h): Supported LBA-Change 00:22:29.983 Dataset Management (09h): Supported LBA-Change 00:22:29.983 Copy (19h): Supported LBA-Change 00:22:29.983 00:22:29.983 Error Log 00:22:29.983 ========= 00:22:29.983 00:22:29.983 Arbitration 00:22:29.983 =========== 00:22:29.983 Arbitration Burst: 1 00:22:29.983 00:22:29.983 Power Management 00:22:29.983 ================ 00:22:29.983 Number of Power States: 1 00:22:29.983 Current Power State: Power State #0 00:22:29.983 Power State #0: 00:22:29.983 Max Power: 0.00 W 00:22:29.983 Non-Operational State: Operational 00:22:29.983 Entry Latency: Not Reported 00:22:29.983 Exit Latency: Not Reported 00:22:29.983 Relative Read Throughput: 0 00:22:29.983 Relative Read Latency: 0 00:22:29.983 Relative Write Throughput: 0 00:22:29.983 Relative Write Latency: 0 00:22:29.983 Idle Power: Not Reported 00:22:29.983 Active Power: Not Reported 00:22:29.983 Non-Operational Permissive Mode: Not Supported 00:22:29.983 00:22:29.983 Health Information 00:22:29.983 ================== 00:22:29.983 Critical Warnings: 00:22:29.983 Available Spare Space: OK 00:22:29.983 Temperature: OK 00:22:29.983 Device Reliability: OK 00:22:29.983 Read Only: No 00:22:29.983 Volatile Memory Backup: OK 00:22:29.983 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:29.983 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:29.983 Available Spare: 0% 00:22:29.983 Available Spare Threshold: 0% 00:22:29.983 Life Percentage Used:[2024-07-25 14:49:50.155800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.983 [2024-07-25 14:49:50.155804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x232cec0) 00:22:29.983 [2024-07-25 14:49:50.155811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.983 [2024-07-25 14:49:50.155824] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b08c0, cid 7, qid 0 00:22:29.983 [2024-07-25 14:49:50.155998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.983 [2024-07-25 14:49:50.156007] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.983 [2024-07-25 14:49:50.156011] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.983 [2024-07-25 14:49:50.156014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b08c0) on tqpair=0x232cec0 00:22:29.983 [2024-07-25 14:49:50.160057] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:29.983 [2024-07-25 14:49:50.160073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23afe40) on tqpair=0x232cec0 00:22:29.983 [2024-07-25 14:49:50.160079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.983 [2024-07-25 14:49:50.160084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23affc0) on tqpair=0x232cec0 00:22:29.983 [2024-07-25 14:49:50.160088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.983 [2024-07-25 
14:49:50.160092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b0140) on tqpair=0x232cec0 00:22:29.983 [2024-07-25 14:49:50.160096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.983 [2024-07-25 14:49:50.160100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.983 [2024-07-25 14:49:50.160104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.983 [2024-07-25 14:49:50.160112] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.983 [2024-07-25 14:49:50.160116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.983 [2024-07-25 14:49:50.160119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.983 [2024-07-25 14:49:50.160126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.983 [2024-07-25 14:49:50.160140] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.983 [2024-07-25 14:49:50.160365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.983 [2024-07-25 14:49:50.160375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.983 [2024-07-25 14:49:50.160378] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.983 [2024-07-25 14:49:50.160381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.983 [2024-07-25 14:49:50.160392] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.983 [2024-07-25 14:49:50.160396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.983 [2024-07-25 14:49:50.160399] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.983 [2024-07-25 14:49:50.160405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.983 [2024-07-25 14:49:50.160423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.983 [2024-07-25 14:49:50.160594] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.983 [2024-07-25 14:49:50.160603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.983 [2024-07-25 14:49:50.160606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.983 [2024-07-25 14:49:50.160610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.983 [2024-07-25 14:49:50.160614] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:29.983 [2024-07-25 14:49:50.160618] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:29.983 [2024-07-25 14:49:50.160629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.983 [2024-07-25 14:49:50.160633] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.983 [2024-07-25 14:49:50.160636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.983 [2024-07-25 14:49:50.160643] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.983 [2024-07-25 14:49:50.160655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.983 [2024-07-25 14:49:50.160811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.983 [2024-07-25 14:49:50.160820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.983 [2024-07-25 14:49:50.160823] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.983 [2024-07-25 14:49:50.160826] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.983 [2024-07-25 14:49:50.160839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.983 [2024-07-25 14:49:50.160843] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.983 [2024-07-25 14:49:50.160846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.983 [2024-07-25 14:49:50.160852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.983 [2024-07-25 14:49:50.160864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.983 [2024-07-25 14:49:50.161017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.983 [2024-07-25 14:49:50.161026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.984 [2024-07-25 14:49:50.161029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.984 [2024-07-25 14:49:50.161051] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161057] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161061] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.984 [2024-07-25 14:49:50.161070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.984 [2024-07-25 14:49:50.161085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.984 [2024-07-25 14:49:50.161237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.984 [2024-07-25 14:49:50.161247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.984 [2024-07-25 14:49:50.161250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.984 [2024-07-25 14:49:50.161268] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.984 [2024-07-25 14:49:50.161281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.984 [2024-07-25 14:49:50.161294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.984 [2024-07-25 14:49:50.161446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.984 [2024-07-25 14:49:50.161456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.984 [2024-07-25 14:49:50.161459] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.984 [2024-07-25 14:49:50.161474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161477] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.984 [2024-07-25 14:49:50.161487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.984 [2024-07-25 14:49:50.161499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.984 [2024-07-25 14:49:50.161655] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.984 [2024-07-25 14:49:50.161665] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.984 [2024-07-25 14:49:50.161668] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161671] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.984 [2024-07-25 14:49:50.161683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161687] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.984 [2024-07-25 14:49:50.161697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.984 [2024-07-25 14:49:50.161709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.984 [2024-07-25 14:49:50.161862] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.984 [2024-07-25 14:49:50.161871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.984 [2024-07-25 14:49:50.161874] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161878] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.984 [2024-07-25 14:49:50.161889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.161896] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.984 [2024-07-25 14:49:50.161902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.984 [2024-07-25 14:49:50.161914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.984 [2024-07-25 14:49:50.162078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.984 [2024-07-25 14:49:50.162089] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.984 [2024-07-25 14:49:50.162092] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.162095] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.984 [2024-07-25 14:49:50.162110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.162114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.162117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.984 [2024-07-25 14:49:50.162123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.984 [2024-07-25 14:49:50.162136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.984 [2024-07-25 14:49:50.162288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.984 [2024-07-25 14:49:50.162300] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.984 [2024-07-25 14:49:50.162302] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.162306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.984 [2024-07-25 14:49:50.162317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.162321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.162324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.984 [2024-07-25 14:49:50.162331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.984 [2024-07-25 14:49:50.162343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.984 [2024-07-25 14:49:50.162495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.984 [2024-07-25 14:49:50.162505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.984 [2024-07-25 14:49:50.162508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.162511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.984 [2024-07-25 14:49:50.162522] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.162526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.162530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.984 [2024-07-25 14:49:50.162536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.984 [2024-07-25 14:49:50.162548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.984 [2024-07-25 14:49:50.162701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.984 [2024-07-25 14:49:50.162710] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.984 [2024-07-25 14:49:50.162713] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.984 [2024-07-25 
14:49:50.162717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.984 [2024-07-25 14:49:50.162728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.162732] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.162735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.984 [2024-07-25 14:49:50.162741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.984 [2024-07-25 14:49:50.162753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.984 [2024-07-25 14:49:50.162909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.984 [2024-07-25 14:49:50.162919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.984 [2024-07-25 14:49:50.162922] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.162925] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.984 [2024-07-25 14:49:50.162939] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.162943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.162946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.984 [2024-07-25 14:49:50.162953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.984 [2024-07-25 14:49:50.162964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.984 [2024-07-25 14:49:50.163124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.984 [2024-07-25 14:49:50.163134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.984 [2024-07-25 14:49:50.163137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.163141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.984 [2024-07-25 14:49:50.163151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.163155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.163158] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.984 [2024-07-25 14:49:50.163165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.984 [2024-07-25 14:49:50.163177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.984 [2024-07-25 14:49:50.163554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.984 [2024-07-25 14:49:50.163560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.984 [2024-07-25 14:49:50.163563] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.163567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.984 [2024-07-25 14:49:50.163576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.163579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.984 [2024-07-25 14:49:50.163582] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.985 [2024-07-25 14:49:50.163588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.985 [2024-07-25 14:49:50.163599] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.985 [2024-07-25 14:49:50.163752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.985 [2024-07-25 14:49:50.163761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.985 [2024-07-25 14:49:50.163764] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.985 [2024-07-25 14:49:50.163768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.985 [2024-07-25 14:49:50.163780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.985 [2024-07-25 14:49:50.163783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.985 [2024-07-25 14:49:50.163787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.985 [2024-07-25 14:49:50.163793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.985 [2024-07-25 14:49:50.163805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.985 [2024-07-25 14:49:50.163959] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.985 [2024-07-25 14:49:50.163968] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.985 [2024-07-25 14:49:50.163971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.985 [2024-07-25 14:49:50.163974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.985 [2024-07-25 14:49:50.163985] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.985 [2024-07-25 14:49:50.163989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.985 [2024-07-25 14:49:50.163995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.985 [2024-07-25 14:49:50.164001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.985 [2024-07-25 14:49:50.164013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.985 [2024-07-25 14:49:50.168051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.985 [2024-07-25 14:49:50.168060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.985 [2024-07-25 14:49:50.168063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.985 [2024-07-25 14:49:50.168066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.985 [2024-07-25 14:49:50.168076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.985 [2024-07-25 14:49:50.168080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.985 [2024-07-25 14:49:50.168083] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232cec0) 00:22:29.985 [2024-07-25 14:49:50.168089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.985 [2024-07-25 14:49:50.168102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b02c0, cid 3, qid 0 00:22:29.985 [2024-07-25 14:49:50.168323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.985 [2024-07-25 14:49:50.168333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.985 [2024-07-25 14:49:50.168336] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.985 [2024-07-25 14:49:50.168339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23b02c0) on tqpair=0x232cec0 00:22:29.985 [2024-07-25 14:49:50.168348] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:22:29.985 0% 00:22:29.985 Data Units Read: 0 00:22:29.985 Data Units Written: 0 00:22:29.985 Host Read Commands: 0 00:22:29.985 Host Write Commands: 0 00:22:29.985 Controller Busy Time: 0 minutes 00:22:29.985 Power Cycles: 0 00:22:29.985 Power On Hours: 0 hours 00:22:29.985 Unsafe Shutdowns: 0 00:22:29.985 Unrecoverable Media Errors: 0 00:22:29.985 Lifetime Error Log Entries: 0 00:22:29.985 Warning Temperature Time: 0 minutes 00:22:29.985 Critical Temperature Time: 0 minutes 00:22:29.985 00:22:29.985 Number of Queues 00:22:29.985 ================ 00:22:29.985 Number of I/O Submission Queues: 127 00:22:29.985 Number of I/O Completion Queues: 127 00:22:29.985 00:22:29.985 Active Namespaces 00:22:29.985 ================= 00:22:29.985 Namespace ID:1 00:22:29.985 Error Recovery Timeout: Unlimited 00:22:29.985 Command Set Identifier: NVM (00h) 00:22:29.985 Deallocate: Supported 00:22:29.985 Deallocated/Unwritten Error: Not Supported 00:22:29.985 Deallocated Read Value: Unknown 00:22:29.985 Deallocate in Write Zeroes: Not Supported 00:22:29.985 Deallocated Guard Field: 0xFFFF 00:22:29.985 Flush: Supported 00:22:29.985 Reservation: Supported 00:22:29.985 Namespace Sharing Capabilities: Multiple Controllers 00:22:29.985 Size (in LBAs): 131072 (0GiB) 00:22:29.985 Capacity (in LBAs): 131072 (0GiB) 00:22:29.985 Utilization (in LBAs): 131072 (0GiB) 00:22:29.985 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:29.985 EUI64: ABCDEF0123456789 00:22:29.985 UUID: bd05c734-5343-4eeb-9412-c65263c53eea 00:22:29.985 Thin Provisioning: Not Supported 00:22:29.985 Per-NS Atomic Units: Yes 00:22:29.985 Atomic Boundary Size (Normal): 0 00:22:29.985 Atomic Boundary Size (PFail): 0 00:22:29.985 Atomic Boundary Offset: 0 00:22:29.985 Maximum Single Source Range Length: 65535 00:22:29.985 Maximum Copy Length: 65535 00:22:29.985 Maximum Source Range Count: 1 00:22:29.985 NGUID/EUI64 Never Reused: No 00:22:29.985 Namespace Write Protected: No 00:22:29.985 Number of LBA Formats: 1 00:22:29.985 Current LBA Format: LBA Format #00 00:22:29.985 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:29.985 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.985 14:49:50 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:29.985 rmmod nvme_tcp 00:22:29.985 rmmod nvme_fabrics 00:22:29.985 rmmod nvme_keyring 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2407526 ']' 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2407526 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2407526 ']' 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2407526 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:29.985 14:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2407526 00:22:30.245 14:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:30.245 14:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:30.245 14:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2407526' 00:22:30.245 killing process with pid 2407526 00:22:30.245 14:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2407526 00:22:30.245 14:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2407526 00:22:30.245 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:30.245 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:30.245 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:30.245 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:30.245 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:30.245 14:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.245 14:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.245 14:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.784 14:49:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:32.784 00:22:32.784 real 0m9.343s 00:22:32.784 user 0m7.844s 00:22:32.784 sys 0m4.445s 00:22:32.784 14:49:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:32.784 14:49:52 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:22:32.784 ************************************ 00:22:32.784 END TEST nvmf_identify 00:22:32.784 ************************************ 00:22:32.784 14:49:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:32.784 14:49:52 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:32.784 14:49:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:32.784 14:49:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:32.784 14:49:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:32.784 ************************************ 00:22:32.784 START TEST nvmf_perf 00:22:32.784 ************************************ 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:32.784 * Looking for test storage... 00:22:32.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.784 14:49:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.785 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:32.785 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:32.785 14:49:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:32.785 14:49:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:38.064 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:38.064 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.064 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:38.065 Found net devices under 0000:86:00.0: cvl_0_0 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.065 14:49:58 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:38.065 Found net devices under 0000:86:00.1: cvl_0_1 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:38.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms
00:22:38.065 
00:22:38.065 --- 10.0.0.2 ping statistics ---
00:22:38.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:38.065 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:38.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:38.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms
00:22:38.065 
00:22:38.065 --- 10.0.0.1 ping statistics ---
00:22:38.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:38.065 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2411283
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2411283
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2411283 ']'
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:38.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:38.065 14:49:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:22:38.325 [2024-07-25 14:49:58.375495] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization...
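Taken together, the nvmf_tcp_init sequence traced above reduces to a small amount of namespace plumbing before the target application starts. A condensed sketch of those steps (interface names cvl_0_0/cvl_0_1 and the namespace name are the ones detected in this run; the target binary path is shortened for readability, and the backgrounding is only illustrative):

  ip netns add cvl_0_0_ns_spdk                                  # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the first E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic reach the initiator side
  ping -c 1 10.0.0.2                                            # reachability check, root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # and back the other way
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target app on cores 0-3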
00:22:38.325 [2024-07-25 14:49:58.375536] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.325 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.325 [2024-07-25 14:49:58.433584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.325 [2024-07-25 14:49:58.513387] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.325 [2024-07-25 14:49:58.513427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.325 [2024-07-25 14:49:58.513434] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.325 [2024-07-25 14:49:58.513440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.325 [2024-07-25 14:49:58.513446] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.325 [2024-07-25 14:49:58.513492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.325 [2024-07-25 14:49:58.513591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.325 [2024-07-25 14:49:58.513668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.325 [2024-07-25 14:49:58.513669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.894 14:49:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.894 14:49:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:38.894 14:49:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.894 14:49:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.894 14:49:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:39.154 14:49:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.154 14:49:59 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:39.154 14:49:59 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:42.446 14:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:42.446 14:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:42.446 14:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:42.446 14:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:42.446 14:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:42.446 14:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:42.446 14:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:42.446 14:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:42.446 14:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:42.705 [2024-07-25 14:50:02.771657] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:22:42.705 14:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:42.965 14:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:42.965 14:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:42.965 14:50:03 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:42.965 14:50:03 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:43.223 14:50:03 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.483 [2024-07-25 14:50:03.527889] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.483 14:50:03 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:43.483 14:50:03 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:43.483 14:50:03 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:43.483 14:50:03 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:43.483 14:50:03 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:44.863 Initializing NVMe Controllers 00:22:44.863 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:44.863 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:44.863 Initialization complete. Launching workers. 00:22:44.863 ======================================================== 00:22:44.863 Latency(us) 00:22:44.863 Device Information : IOPS MiB/s Average min max 00:22:44.863 PCIE (0000:5e:00.0) NSID 1 from core 0: 97795.74 382.01 326.74 24.03 7182.54 00:22:44.863 ======================================================== 00:22:44.863 Total : 97795.74 382.01 326.74 24.03 7182.54 00:22:44.863 00:22:44.863 14:50:04 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:44.863 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.243 Initializing NVMe Controllers 00:22:46.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:46.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:46.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:46.243 Initialization complete. Launching workers. 
00:22:46.243 ========================================================
00:22:46.243 Latency(us)
00:22:46.243 Device Information : IOPS MiB/s Average min max
00:22:46.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.00 0.33 12183.96 635.71 45381.08
00:22:46.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.00 0.16 24572.93 7954.24 55866.93
00:22:46.243 ========================================================
00:22:46.243 Total : 125.00 0.49 16247.54 635.71 55866.93
00:22:46.243 
00:22:46.243 14:50:06 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:46.243 EAL: No free 2048 kB hugepages reported on node 1
00:22:47.181 Initializing NVMe Controllers
00:22:47.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:47.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:47.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:47.182 Initialization complete. Launching workers.
00:22:49.723 ========================================================
00:22:49.723 Latency(us)
00:22:49.723 Device Information : IOPS MiB/s Average min max
00:22:49.723 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 785.50 196.37 171344.56 88073.52 291949.00
00:22:49.723 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 547.50 136.87 251147.87 141203.15 416723.21
00:22:49.723 ========================================================
00:22:49.723 Total : 1333.00 333.25 204121.99 88073.52 416723.21
00:22:49.723 
00:22:49.723 14:50:09 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:22:49.723 EAL: No free 2048 kB hugepages reported on node 1
00:22:50.038 No valid NVMe controllers or AIO or URING devices found
00:22:50.038 Initializing NVMe Controllers
00:22:50.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:50.038 Controller IO queue size 128, less than required.
00:22:50.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:50.038 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:22:50.038 Controller IO queue size 128, less than required.
00:22:50.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:50.038 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:22:50.038 WARNING: Some requested NVMe devices were skipped
00:22:50.038 14:50:10 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:22:50.038 EAL: No free 2048 kB hugepages reported on node 1
00:22:52.578 Initializing NVMe Controllers
00:22:52.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:52.578 Controller IO queue size 128, less than required.
00:22:52.578 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:52.578 Controller IO queue size 128, less than required.
00:22:52.578 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:52.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:52.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:52.578 Initialization complete. Launching workers.
00:22:52.578 
00:22:52.578 ====================
00:22:52.578 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:22:52.578 TCP transport:
00:22:52.578 polls: 63075
00:22:52.578 idle_polls: 21480
00:22:52.578 sock_completions: 41595
00:22:52.578 nvme_completions: 3201
00:22:52.578 submitted_requests: 4822
00:22:52.578 queued_requests: 1
00:22:52.578 
00:22:52.578 ====================
00:22:52.578 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:22:52.579 TCP transport:
00:22:52.579 polls: 62555
00:22:52.579 idle_polls: 20393
00:22:52.579 sock_completions: 42162
00:22:52.579 nvme_completions: 3193
00:22:52.579 submitted_requests: 4812
00:22:52.579 queued_requests: 1
00:22:52.579 ========================================================
00:22:52.579 Latency(us)
00:22:52.579 Device Information : IOPS MiB/s Average min max
00:22:52.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 800.00 200.00 166534.18 80981.64 262465.47
00:22:52.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 798.00 199.50 163739.37 96336.45 238075.47
00:22:52.579 ========================================================
00:22:52.579 Total : 1598.00 399.50 165138.52 80981.64 262465.47
00:22:52.579 
00:22:52.579 14:50:12 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync
00:22:52.579 14:50:12 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:52.839 14:50:12 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:22:52.839 14:50:12 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:22:52.839 14:50:12 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:22:52.839 14:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:52.839 14:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:22:52.839 14:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:52.839 14:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:22:52.839 14:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:52.839 14:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:52.839 rmmod nvme_tcp
00:22:52.839 rmmod nvme_fabrics
00:22:52.839 rmmod nvme_keyring
00:22:52.839 14:50:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:52.839 14:50:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:22:52.839 14:50:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:22:52.839 14:50:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2411283 ']'
00:22:52.839 14:50:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2411283
00:22:52.839 14:50:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2411283 ']'
00:22:52.839 14:50:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2411283
00:22:52.839 14:50:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname
00:22:52.839 14:50:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:52.839 14:50:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2411283
00:22:52.839 14:50:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:22:52.839 14:50:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:22:52.839 14:50:13
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2411283' 00:22:52.839 killing process with pid 2411283 00:22:52.839 14:50:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2411283 00:22:52.839 14:50:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2411283 00:22:54.747 14:50:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:54.747 14:50:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:54.747 14:50:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:54.747 14:50:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:54.747 14:50:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:54.747 14:50:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.747 14:50:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.747 14:50:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.663 14:50:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:56.663 00:22:56.663 real 0m23.961s 00:22:56.663 user 1m4.612s 00:22:56.663 sys 0m6.761s 00:22:56.663 14:50:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:56.663 14:50:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:56.663 ************************************ 00:22:56.663 END TEST nvmf_perf 00:22:56.663 ************************************ 00:22:56.663 14:50:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:56.663 14:50:16 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:56.663 14:50:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:56.663 14:50:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:56.663 14:50:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:56.663 ************************************ 00:22:56.663 START TEST nvmf_fio_host 00:22:56.663 ************************************ 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:56.663 * Looking for test storage... 
00:22:56.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:56.663 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:56.664 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.664 14:50:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.664 14:50:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.664 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:56.664 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:56.664 14:50:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:56.664 14:50:16 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:01.945 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
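The block above rebuilds the harness's list of candidate NICs from a cached PCI bus map; the same E810 ports can be located directly by vendor/device ID (0x8086:0x159b, bound to the ice driver in this run). A minimal stand-alone check, assuming lspci is available on the host:

  lspci -nn -d 8086:159b        # lists the Intel E810 functions; in this run, 86:00.0 and 86:00.1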
00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:01.945 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:01.945 Found net devices under 0000:86:00.0: cvl_0_0 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.945 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:01.946 Found net devices under 0000:86:00.1: cvl_0_1 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
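Resolving a PCI function to its kernel net device, which nvmf/common.sh@383 does above with a sysfs glob, is an ordinary lookup. For the first port found in this run:

  ls /sys/bus/pci/devices/0000:86:00.0/net      # prints cvl_0_0, the renamed ice interface used as the target side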
00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:23:01.946 00:23:01.946 --- 10.0.0.2 ping statistics --- 00:23:01.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.946 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.456 ms 00:23:01.946 00:23:01.946 --- 10.0.0.1 ping statistics --- 00:23:01.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.946 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2417369 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2417369 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2417369 ']' 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.946 14:50:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.946 [2024-07-25 14:50:22.014935] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:23:01.946 [2024-07-25 14:50:22.014978] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.946 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.946 [2024-07-25 14:50:22.072100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:01.946 [2024-07-25 14:50:22.152858] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
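waitforlisten, traced above, simply polls until the freshly started nvmf_tgt answers on its RPC socket. A hand-rolled equivalent against the default socket seen in this run (a sketch of the idea, not the helper's actual implementation; the rpc.py path is shortened):

  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done          # socket appears once the app is up
  ./scripts/rpc.py rpc_get_methods > /dev/null                 # first successful RPC confirms it is listening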
00:23:01.946 [2024-07-25 14:50:22.152893] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.946 [2024-07-25 14:50:22.152899] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.946 [2024-07-25 14:50:22.152905] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.946 [2024-07-25 14:50:22.152910] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.946 [2024-07-25 14:50:22.152944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.946 [2024-07-25 14:50:22.153038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.946 [2024-07-25 14:50:22.153115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.946 [2024-07-25 14:50:22.153116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.885 14:50:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.885 14:50:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:23:02.885 14:50:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:02.885 [2024-07-25 14:50:23.000724] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.885 14:50:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:02.885 14:50:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:02.885 14:50:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.885 14:50:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:03.145 Malloc1 00:23:03.145 14:50:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:03.404 14:50:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:03.404 14:50:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.664 [2024-07-25 14:50:23.770977] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.664 14:50:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:03.971 14:50:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:03.971 14:50:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:03.971 14:50:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:23:03.971 14:50:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:03.971 14:50:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:03.971 14:50:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:03.971 14:50:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:03.971 14:50:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:03.971 14:50:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:03.971 14:50:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:03.971 14:50:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:03.971 14:50:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:03.971 14:50:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:03.971 14:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:03.971 14:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:03.971 14:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:03.971 14:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:03.971 14:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:03.971 14:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:03.971 14:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:03.971 14:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:03.971 14:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:03.971 14:50:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:04.228 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:04.228 fio-3.35 00:23:04.228 Starting 1 thread 00:23:04.228 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.754 00:23:06.754 test: (groupid=0, jobs=1): err= 0: pid=2417943: Thu Jul 25 14:50:26 2024 00:23:06.754 read: IOPS=10.4k, BW=40.6MiB/s (42.5MB/s)(81.3MiB/2005msec) 00:23:06.754 slat (nsec): min=1579, max=314163, avg=1859.73, stdev=2970.87 00:23:06.754 clat (usec): min=3781, max=23689, avg=7175.42, stdev=1925.63 00:23:06.754 lat (usec): min=3783, max=23700, avg=7177.28, stdev=1925.79 00:23:06.754 clat percentiles (usec): 00:23:06.754 | 1.00th=[ 4752], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5866], 00:23:06.754 | 30.00th=[ 6063], 40.00th=[ 6259], 50.00th=[ 6521], 60.00th=[ 6915], 00:23:06.754 | 70.00th=[ 7439], 80.00th=[ 8291], 90.00th=[ 9765], 95.00th=[11076], 00:23:06.754 | 99.00th=[13698], 99.50th=[15401], 99.90th=[19530], 99.95th=[22152], 00:23:06.754 | 99.99th=[22938] 00:23:06.754 bw ( KiB/s): 
min=39856, max=42832, per=99.87%, avg=41478.00, stdev=1228.42, samples=4 00:23:06.754 iops : min= 9964, max=10708, avg=10369.50, stdev=307.10, samples=4 00:23:06.754 write: IOPS=10.4k, BW=40.6MiB/s (42.5MB/s)(81.4MiB/2005msec); 0 zone resets 00:23:06.754 slat (nsec): min=1643, max=264569, avg=1955.23, stdev=2084.44 00:23:06.754 clat (usec): min=2511, max=14405, avg=5070.89, stdev=948.30 00:23:06.754 lat (usec): min=2513, max=14420, avg=5072.85, stdev=948.63 00:23:06.754 clat percentiles (usec): 00:23:06.754 | 1.00th=[ 3326], 5.00th=[ 3884], 10.00th=[ 4146], 20.00th=[ 4424], 00:23:06.754 | 30.00th=[ 4621], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5080], 00:23:06.754 | 70.00th=[ 5276], 80.00th=[ 5538], 90.00th=[ 6128], 95.00th=[ 6783], 00:23:06.754 | 99.00th=[ 8225], 99.50th=[ 9241], 99.90th=[11994], 99.95th=[13173], 00:23:06.754 | 99.99th=[14222] 00:23:06.754 bw ( KiB/s): min=40528, max=42704, per=100.00%, avg=41560.00, stdev=1039.34, samples=4 00:23:06.754 iops : min=10132, max=10676, avg=10390.00, stdev=259.84, samples=4 00:23:06.754 lat (msec) : 4=3.37%, 10=92.10%, 20=4.49%, 50=0.04% 00:23:06.754 cpu : usr=66.92%, sys=24.95%, ctx=62, majf=0, minf=6 00:23:06.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:06.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:06.754 issued rwts: total=20818,20828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:06.754 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:06.754 00:23:06.754 Run status group 0 (all jobs): 00:23:06.754 READ: bw=40.6MiB/s (42.5MB/s), 40.6MiB/s-40.6MiB/s (42.5MB/s-42.5MB/s), io=81.3MiB (85.3MB), run=2005-2005msec 00:23:06.754 WRITE: bw=40.6MiB/s (42.5MB/s), 40.6MiB/s-40.6MiB/s (42.5MB/s-42.5MB/s), io=81.4MiB (85.3MB), run=2005-2005msec 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:06.754 14:50:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:06.754 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:06.754 fio-3.35 00:23:06.754 Starting 1 thread 00:23:06.754 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.282 00:23:09.282 test: (groupid=0, jobs=1): err= 0: pid=2418389: Thu Jul 25 14:50:29 2024 00:23:09.282 read: IOPS=8772, BW=137MiB/s (144MB/s)(275MiB/2006msec) 00:23:09.282 slat (nsec): min=2583, max=86735, avg=2920.57, stdev=1455.98 00:23:09.282 clat (usec): min=2008, max=43532, avg=9149.00, stdev=3769.96 00:23:09.282 lat (usec): min=2010, max=43535, avg=9151.92, stdev=3770.37 00:23:09.282 clat percentiles (usec): 00:23:09.282 | 1.00th=[ 4359], 5.00th=[ 5276], 10.00th=[ 5932], 20.00th=[ 6652], 00:23:09.282 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8356], 60.00th=[ 9110], 00:23:09.282 | 70.00th=[10028], 80.00th=[10945], 90.00th=[12256], 95.00th=[14615], 00:23:09.282 | 99.00th=[26870], 99.50th=[29492], 99.90th=[32375], 99.95th=[32637], 00:23:09.282 | 99.99th=[43254] 00:23:09.282 bw ( KiB/s): min=55456, max=79616, per=49.36%, avg=69280.00, stdev=11230.04, samples=4 00:23:09.282 iops : min= 3466, max= 4976, avg=4330.00, stdev=701.88, samples=4 00:23:09.282 write: IOPS=4996, BW=78.1MiB/s (81.9MB/s)(141MiB/1801msec); 0 zone resets 00:23:09.282 slat (usec): min=30, max=379, avg=32.71, stdev= 9.25 00:23:09.282 clat (usec): min=3133, max=35170, avg=9884.32, stdev=3840.70 00:23:09.282 lat (usec): min=3165, max=35217, avg=9917.03, stdev=3844.25 00:23:09.282 clat percentiles (usec): 00:23:09.282 | 1.00th=[ 6390], 5.00th=[ 6980], 10.00th=[ 7308], 20.00th=[ 7767], 00:23:09.282 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:23:09.282 | 70.00th=[10028], 80.00th=[10814], 90.00th=[12125], 95.00th=[15533], 00:23:09.282 | 99.00th=[29754], 99.50th=[32637], 99.90th=[33162], 99.95th=[33817], 00:23:09.282 | 99.99th=[35390] 00:23:09.282 bw ( KiB/s): min=57056, max=82778, per=90.00%, avg=71950.50, stdev=11621.16, samples=4 00:23:09.282 iops : min= 3566, max= 5173, avg=4496.75, stdev=726.13, samples=4 00:23:09.282 lat (msec) : 4=0.36%, 10=69.53%, 20=27.39%, 50=2.72% 00:23:09.282 cpu : usr=84.39%, sys=12.67%, ctx=16, 
majf=0, minf=3 00:23:09.282 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:09.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:09.282 issued rwts: total=17597,8999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:09.282 00:23:09.282 Run status group 0 (all jobs): 00:23:09.282 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=275MiB (288MB), run=2006-2006msec 00:23:09.282 WRITE: bw=78.1MiB/s (81.9MB/s), 78.1MiB/s-78.1MiB/s (81.9MB/s-81.9MB/s), io=141MiB (147MB), run=1801-1801msec 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:09.282 rmmod nvme_tcp 00:23:09.282 rmmod nvme_fabrics 00:23:09.282 rmmod nvme_keyring 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2417369 ']' 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2417369 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2417369 ']' 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2417369 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2417369 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2417369' 00:23:09.282 killing process with pid 2417369 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2417369 00:23:09.282 14:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2417369 00:23:09.541 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:09.541 14:50:29 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:09.541 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:09.541 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.541 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:09.541 14:50:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.541 14:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.541 14:50:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.074 14:50:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:12.074 00:23:12.074 real 0m15.102s 00:23:12.074 user 0m45.934s 00:23:12.074 sys 0m5.801s 00:23:12.074 14:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:12.074 14:50:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.074 ************************************ 00:23:12.074 END TEST nvmf_fio_host 00:23:12.074 ************************************ 00:23:12.074 14:50:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:12.074 14:50:31 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:12.074 14:50:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:12.074 14:50:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:12.074 14:50:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:12.074 ************************************ 00:23:12.074 START TEST nvmf_failover 00:23:12.074 ************************************ 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:12.074 * Looking for test storage... 
00:23:12.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:12.074 14:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:17.346 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:17.346 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.346 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:17.347 Found net devices under 0000:86:00.0: cvl_0_0 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:17.347 Found net devices under 0000:86:00.1: cvl_0_1 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.347 14:50:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:17.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:23:17.347 00:23:17.347 --- 10.0.0.2 ping statistics --- 00:23:17.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.347 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:17.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:23:17.347 00:23:17.347 --- 10.0.0.1 ping statistics --- 00:23:17.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.347 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2422277 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2422277 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2422277 ']' 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.347 14:50:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:17.347 [2024-07-25 14:50:37.299487] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:23:17.347 [2024-07-25 14:50:37.299535] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.347 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.347 [2024-07-25 14:50:37.358231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:17.348 [2024-07-25 14:50:37.437772] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.348 [2024-07-25 14:50:37.437811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.348 [2024-07-25 14:50:37.437817] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.348 [2024-07-25 14:50:37.437823] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.348 [2024-07-25 14:50:37.437828] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.348 [2024-07-25 14:50:37.437944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.348 [2024-07-25 14:50:37.438035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.348 [2024-07-25 14:50:37.438036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.947 14:50:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.947 14:50:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:17.947 14:50:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.947 14:50:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:17.947 14:50:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:17.947 14:50:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.947 14:50:38 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:18.205 [2024-07-25 14:50:38.307094] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.205 14:50:38 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:18.463 Malloc0 00:23:18.463 14:50:38 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:18.463 14:50:38 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:18.720 14:50:38 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.978 [2024-07-25 14:50:39.047861] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.978 14:50:39 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:18.978 [2024-07-25 
14:50:39.228355] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:18.978 14:50:39 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:19.236 [2024-07-25 14:50:39.408982] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:19.236 14:50:39 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:19.236 14:50:39 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2422545 00:23:19.236 14:50:39 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:19.236 14:50:39 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2422545 /var/tmp/bdevperf.sock 00:23:19.236 14:50:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2422545 ']' 00:23:19.236 14:50:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.236 14:50:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.236 14:50:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.236 14:50:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.236 14:50:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:20.170 14:50:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.170 14:50:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:20.170 14:50:40 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.427 NVMe0n1 00:23:20.427 14:50:40 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.685 00:23:20.943 14:50:40 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:20.943 14:50:40 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2422786 00:23:20.943 14:50:41 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:21.878 14:50:42 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:21.878 [2024-07-25 14:50:42.170640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4a090 is same with the state(5) to be set 00:23:21.878 [2024-07-25 14:50:42.170724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xf4a090 is same with the state(5) to be set 00:23:22.137 [2024-07-25 14:50:42.171284] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xf4a090 is same with the state(5) to be set 00:23:22.137 [2024-07-25 14:50:42.171290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4a090 is same with the state(5) to be set 00:23:22.137 [2024-07-25 14:50:42.171296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4a090 is same with the state(5) to be set 00:23:22.137 [2024-07-25 14:50:42.171302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4a090 is same with the state(5) to be set 00:23:22.137 14:50:42 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:25.418 14:50:45 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.418 00:23:25.418 14:50:45 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:25.677 [2024-07-25 14:50:45.765453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765569] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b600 is same 
with the state(5) to be set 00:23:25.677 [2024-07-25 14:50:45.765704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
00:23:25.677 14:50:45 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:28.959 14:50:48 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:28.959 [2024-07-25 14:50:48.962061] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:28.959 14:50:48 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:29.895 14:50:49 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:29.895 [2024-07-25 14:50:50.162542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bce0 is same with the state(5) to be set
00:23:29.895 [... the same nvmf_tcp_qpair_set_recv_state *ERROR* for tqpair=0xf4bce0 repeats at sub-millisecond intervals from 14:50:50.162542 through 14:50:50.163329 ...]
00:23:30.155 14:50:50 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2422786
00:23:36.713 0
00:23:36.713 14:50:56 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2422545
00:23:36.713 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2422545 ']'
00:23:36.713 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2422545
00:23:36.713 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:23:36.713 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:36.713 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2422545
00:23:36.713 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:36.713 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:36.713 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2422545'
00:23:36.713 killing process with pid 2422545
00:23:36.713 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2422545
00:23:36.713 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2422545
00:23:36.713 14:50:56 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:36.713 [2024-07-25 14:50:39.479705] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization...
00:23:36.713 [2024-07-25 14:50:39.479754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2422545 ]
00:23:36.713 EAL: No free 2048 kB hugepages reported on node 1
00:23:36.713 [2024-07-25 14:50:39.534302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:36.713 [2024-07-25 14:50:39.609168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:36.713 Running I/O for 15 seconds...
00:23:36.713 [2024-07-25 14:50:42.172161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:36.713 [2024-07-25 14:50:42.172195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:36.713 [... the same READ print_command / ABORTED - SQ DELETION (00/08) print_completion pair repeats for each outstanding READ on sqid:1, lba 94712 through 95424 in steps of 8, len:8, varying cid ...]
00:23:36.715 [2024-07-25 14:50:42.173531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:36.715 [2024-07-25 14:50:42.173537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:36.716 [... the same WRITE print_command / ABORTED - SQ DELETION (00/08) print_completion pair repeats for each outstanding WRITE on sqid:1, lba 95440 through 95664 in steps of 8, len:8, varying cid ...]
00:23:36.716 [2024-07-25 14:50:42.173971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:78 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.716 [2024-07-25 14:50:42.173977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.716 [2024-07-25 14:50:42.173985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.716 [2024-07-25 14:50:42.173991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.716 [2024-07-25 14:50:42.173999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.716 [2024-07-25 14:50:42.174005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.716 [2024-07-25 14:50:42.174013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.716 [2024-07-25 14:50:42.174021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.716 [2024-07-25 14:50:42.174029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.716 [2024-07-25 14:50:42.174035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.716 [2024-07-25 14:50:42.174046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.716 [2024-07-25 14:50:42.174053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.716 [2024-07-25 14:50:42.174071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:36.716 [2024-07-25 14:50:42.174077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:36.716 [2024-07-25 14:50:42.174085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95720 len:8 PRP1 0x0 PRP2 0x0 00:23:36.716 [2024-07-25 14:50:42.174091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.716 [2024-07-25 14:50:42.174133] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13b8730 was disconnected and freed. reset controller. 
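The wall of ABORTED - SQ DELETION completions above is the expected signature of a path being torn down under load: when bdev_nvme deletes the I/O submission queue to leave the dead path, every command still queued on it is completed manually with that status (nvme_qpair_abort_queued_reqs, "Command completed manually") and can then be retried once the controller is reset on the surviving path. A minimal sketch of the kind of I/O load that produces this, assuming SPDK's bdevperf example and a JSON config that attaches the multipath controller; the binary path, flags and values here are illustrative, not taken from this job:

  # Hypothetical reproduction sketch; flags and values are assumptions, not from this log.
  # Keep 128 commands in flight against the NVMe-oF bdev while a listener is withdrawn,
  # so queued I/O is aborted with "SQ DELETION" and retried after the failover.
  ./build/examples/bdevperf --json /tmp/multipath.json -q 128 -o 4096 -w randwrite -t 30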
00:23:36.716 [2024-07-25 14:50:42.174142] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:23:36.716 [2024-07-25 14:50:42.174160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:36.716 [2024-07-25 14:50:42.174168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:36.716 [2024-07-25 14:50:42.174175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:36.716 [2024-07-25 14:50:42.174182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:36.717 [2024-07-25 14:50:42.174189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:36.717 [2024-07-25 14:50:42.174195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:36.717 [2024-07-25 14:50:42.174203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:36.717 [2024-07-25 14:50:42.174209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:36.717 [2024-07-25 14:50:42.174215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:36.717 [2024-07-25 14:50:42.174240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1398540 (9): Bad file descriptor 
00:23:36.717 [2024-07-25 14:50:42.177085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:23:36.717 [2024-07-25 14:50:42.247456] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
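Here the first failover completes: bdev_nvme abandons 10.0.0.2:4420, marks the controller as failed, reconnects on 10.0.0.2:4421 and reports the reset as successful. A minimal sketch of the target/initiator plumbing that makes this possible, assuming SPDK's rpc.py with a TCP transport and one subsystem exposed on the three ports seen in this log; the exact RPC sequence is an assumption, not reproduced from this job:

  # Hypothetical setup sketch; the subsystem NQN, address and ports mirror the log,
  # but the RPC invocations themselves are assumed.
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # Initiator side: attach the same controller once per path so bdev_nvme can
  # fail over between them when the active listener disappears.
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

The -x failover flag on the extra attaches is assumed here to register them as alternate trids for the same controller rather than as independent controllers, which is what gives bdev_nvme_failover_trid somewhere to go.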
00:23:36.717 [2024-07-25 14:50:45.767007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.717 [2024-07-25 14:50:45.767040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.717 [2024-07-25 14:50:45.767067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.717 [2024-07-25 14:50:45.767086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.717 [2024-07-25 14:50:45.767101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.717 [2024-07-25 14:50:45.767116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.717 [2024-07-25 14:50:45.767131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.717 [2024-07-25 14:50:45.767145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.717 [2024-07-25 14:50:45.767159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767198] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:68 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.717 [2024-07-25 14:50:45.767503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.717 [2024-07-25 14:50:45.767510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49264 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 
14:50:45.767787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.718 [2024-07-25 14:50:45.767876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.718 [2024-07-25 14:50:45.767890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.718 [2024-07-25 14:50:45.767904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.718 [2024-07-25 14:50:45.767918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.718 [2024-07-25 14:50:45.767932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.718 [2024-07-25 14:50:45.767946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.718 [2024-07-25 14:50:45.767960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.718 [2024-07-25 14:50:45.767975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.767990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.767998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.718 [2024-07-25 14:50:45.768005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.718 [2024-07-25 14:50:45.768013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.719 [2024-07-25 14:50:45.768331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 
[2024-07-25 14:50:45.768385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.719 [2024-07-25 14:50:45.768573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.719 [2024-07-25 14:50:45.768579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49720 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.720 [2024-07-25 14:50:45.768898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:36.720 [2024-07-25 14:50:45.768922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:36.720 [2024-07-25 14:50:45.768927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49768 len:8 PRP1 0x0 PRP2 0x0 00:23:36.720 [2024-07-25 14:50:45.768936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.768976] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15633c0 was disconnected and freed. reset controller. 
00:23:36.720 [2024-07-25 14:50:45.768984] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:36.720 [2024-07-25 14:50:45.769002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.720 [2024-07-25 14:50:45.769009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.769017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.720 [2024-07-25 14:50:45.769023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.769029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.720 [2024-07-25 14:50:45.769035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.769046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.720 [2024-07-25 14:50:45.769053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:45.769059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.720 [2024-07-25 14:50:45.771890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.720 [2024-07-25 14:50:45.771916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1398540 (9): Bad file descriptor 00:23:36.720 [2024-07-25 14:50:45.891407] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:36.720 [2024-07-25 14:50:50.164474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.720 [2024-07-25 14:50:50.164510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:50.164525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.720 [2024-07-25 14:50:50.164533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:50.164542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.720 [2024-07-25 14:50:50.164549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:50.164557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.720 [2024-07-25 14:50:50.164563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:50.164571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.720 [2024-07-25 14:50:50.164578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:50.164586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.720 [2024-07-25 14:50:50.164597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:50.164605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.720 [2024-07-25 14:50:50.164611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.720 [2024-07-25 14:50:50.164619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.720 [2024-07-25 14:50:50.164626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164662] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:34 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.164989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.164997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.165004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.165012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.165018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.165026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.165033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.165040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.165054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.165063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.165069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.165077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.165084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.165091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.165098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.165106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79800 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.165112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.165120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.165126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.165134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.165142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.165150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.165159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.165167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.165173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.165181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.721 [2024-07-25 14:50:50.165187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.721 [2024-07-25 14:50:50.165195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:36.722 [2024-07-25 14:50:50.165258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165401] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.722 [2024-07-25 14:50:50.165587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.722 [2024-07-25 14:50:50.165602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.722 [2024-07-25 14:50:50.165616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.722 [2024-07-25 14:50:50.165630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.722 [2024-07-25 14:50:50.165644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.722 [2024-07-25 14:50:50.165658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.722 [2024-07-25 14:50:50.165672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.722 [2024-07-25 14:50:50.165680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.165973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 
14:50:50.165988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.165995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.723 [2024-07-25 14:50:50.166221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:36.723 [2024-07-25 14:50:50.166250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80416 len:8 PRP1 0x0 PRP2 0x0 00:23:36.723 [2024-07-25 14:50:50.166257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.723 [2024-07-25 14:50:50.166266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:36.723 [2024-07-25 14:50:50.166271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:36.724 [2024-07-25 14:50:50.166278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80424 len:8 PRP1 0x0 PRP2 0x0 00:23:36.724 [2024-07-25 14:50:50.166285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.724 [2024-07-25 14:50:50.166291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:36.724 [2024-07-25 14:50:50.166296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:36.724 [2024-07-25 14:50:50.166302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80432 len:8 PRP1 0x0 PRP2 0x0 00:23:36.724 [2024-07-25 14:50:50.166308] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.724 [2024-07-25 14:50:50.166315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:36.724 [2024-07-25 14:50:50.166320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:36.724 [2024-07-25 14:50:50.166325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80440 len:8 PRP1 0x0 PRP2 0x0 00:23:36.724 [2024-07-25 14:50:50.166331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.724 [2024-07-25 14:50:50.166338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:36.724 [2024-07-25 14:50:50.166343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:36.724 [2024-07-25 14:50:50.166348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80448 len:8 PRP1 0x0 PRP2 0x0 00:23:36.724 [2024-07-25 14:50:50.166354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.724 [2024-07-25 14:50:50.166361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:36.724 [2024-07-25 14:50:50.166366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:36.724 [2024-07-25 14:50:50.166371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80456 len:8 PRP1 0x0 PRP2 0x0 00:23:36.724 [2024-07-25 14:50:50.166377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.724 [2024-07-25 14:50:50.166383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:36.724 [2024-07-25 14:50:50.166388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:36.724 [2024-07-25 14:50:50.166393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80464 len:8 PRP1 0x0 PRP2 0x0 00:23:36.724 [2024-07-25 14:50:50.166400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.724 [2024-07-25 14:50:50.166407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:36.724 [2024-07-25 14:50:50.166412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:36.724 [2024-07-25 14:50:50.166418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80472 len:8 PRP1 0x0 PRP2 0x0 00:23:36.724 [2024-07-25 14:50:50.166425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.724 [2024-07-25 14:50:50.166432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:36.724 [2024-07-25 14:50:50.166436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:36.724 [2024-07-25 14:50:50.166442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80480 len:8 PRP1 0x0 PRP2 0x0 00:23:36.724 [2024-07-25 14:50:50.166448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.724 [2024-07-25 14:50:50.166454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:36.724 [2024-07-25 14:50:50.166460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:36.724 [2024-07-25 14:50:50.166466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80488 len:8 PRP1 0x0 PRP2 0x0 00:23:36.724 [2024-07-25 14:50:50.166472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.724 [2024-07-25 14:50:50.166479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:36.724 [2024-07-25 14:50:50.166485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:36.724 [2024-07-25 14:50:50.166491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80496 len:8 PRP1 0x0 PRP2 0x0 00:23:36.724 [2024-07-25 14:50:50.166499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.724 [2024-07-25 14:50:50.166541] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1563080 was disconnected and freed. reset controller. 00:23:36.724 [2024-07-25 14:50:50.166551] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:36.724 [2024-07-25 14:50:50.178394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.724 [2024-07-25 14:50:50.178408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.724 [2024-07-25 14:50:50.178419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.724 [2024-07-25 14:50:50.178427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.724 [2024-07-25 14:50:50.178437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.724 [2024-07-25 14:50:50.178446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.724 [2024-07-25 14:50:50.178455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.724 [2024-07-25 14:50:50.178464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.724 [2024-07-25 14:50:50.178473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.724 [2024-07-25 14:50:50.178509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1398540 (9): Bad file descriptor 00:23:36.724 [2024-07-25 14:50:50.182407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.724 [2024-07-25 14:50:50.217466] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
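The blocks above all repeat the same pattern: queued I/O on the removed path is completed with ABORTED - SQ DELETION, bdev_nvme starts a failover to the next listener, and the controller reset completes. As a hedged illustration (not part of the captured output), the transitions in a saved copy of this output, for example the try.txt file the trace cats further down, could be tallied with:

  grep -Eo 'Start failover from [0-9.]+:[0-9]+ to [0-9.]+:[0-9]+' try.txt | sort | uniq -c
  grep -c 'Resetting controller successful' try.txt

The second command is the same count the test script applies to this run in the trace that follows.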
00:23:36.724 00:23:36.724 Latency(us) 00:23:36.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.724 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:36.724 Verification LBA range: start 0x0 length 0x4000 00:23:36.724 NVMe0n1 : 15.01 10979.40 42.89 689.16 0.00 10947.13 1510.18 32597.04 00:23:36.724 =================================================================================================================== 00:23:36.724 Total : 10979.40 42.89 689.16 0.00 10947.13 1510.18 32597.04 00:23:36.724 Received shutdown signal, test time was about 15.000000 seconds 00:23:36.724 00:23:36.724 Latency(us) 00:23:36.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.724 =================================================================================================================== 00:23:36.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.724 14:50:56 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:36.724 14:50:56 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:36.724 14:50:56 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:36.724 14:50:56 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2425306 00:23:36.724 14:50:56 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:36.724 14:50:56 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2425306 /var/tmp/bdevperf.sock 00:23:36.724 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2425306 ']' 00:23:36.724 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.724 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.724 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:36.724 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.724 14:50:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:36.982 14:50:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.982 14:50:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:36.982 14:50:57 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:37.239 [2024-07-25 14:50:57.388600] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:37.239 14:50:57 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:37.496 [2024-07-25 14:50:57.569113] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:37.496 14:50:57 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:37.754 NVMe0n1 00:23:37.754 14:50:57 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:38.319 00:23:38.319 14:50:58 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:38.576 00:23:38.576 14:50:58 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:38.576 14:50:58 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:38.576 14:50:58 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:38.833 14:50:58 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:42.146 14:51:01 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.146 14:51:01 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:42.146 14:51:02 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2426373 00:23:42.146 14:51:02 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:42.146 14:51:02 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2426373 00:23:43.092 0 00:23:43.092 14:51:03 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:43.092 [2024-07-25 14:50:56.421691] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:23:43.092 [2024-07-25 14:50:56.421741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425306 ] 00:23:43.092 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.092 [2024-07-25 14:50:56.475946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.092 [2024-07-25 14:50:56.545119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.092 [2024-07-25 14:50:58.961010] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:43.092 [2024-07-25 14:50:58.961058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.092 [2024-07-25 14:50:58.961070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.092 [2024-07-25 14:50:58.961079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.092 [2024-07-25 14:50:58.961086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.092 [2024-07-25 14:50:58.961093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.092 [2024-07-25 14:50:58.961100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.092 [2024-07-25 14:50:58.961107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.092 [2024-07-25 14:50:58.961115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.092 [2024-07-25 14:50:58.961121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:43.092 [2024-07-25 14:50:58.961148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:43.092 [2024-07-25 14:50:58.961161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2551540 (9): Bad file descriptor 00:23:43.092 [2024-07-25 14:50:58.972488] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:43.092 Running I/O for 1 seconds... 
00:23:43.092 00:23:43.092 Latency(us) 00:23:43.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.092 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:43.092 Verification LBA range: start 0x0 length 0x4000 00:23:43.092 NVMe0n1 : 1.01 11774.19 45.99 0.00 0.00 10816.61 1695.39 11055.64 00:23:43.092 =================================================================================================================== 00:23:43.092 Total : 11774.19 45.99 0.00 0.00 10816.61 1695.39 11055.64 00:23:43.092 14:51:03 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:43.092 14:51:03 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:43.349 14:51:03 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:43.607 14:51:03 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:43.607 14:51:03 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:43.607 14:51:03 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:43.864 14:51:04 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:47.141 14:51:07 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:47.141 14:51:07 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:47.141 14:51:07 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2425306 00:23:47.141 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2425306 ']' 00:23:47.141 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2425306 00:23:47.141 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:47.141 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.141 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2425306 00:23:47.141 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:47.141 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:47.141 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2425306' 00:23:47.141 killing process with pid 2425306 00:23:47.141 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2425306 00:23:47.141 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2425306 00:23:47.398 14:51:07 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:47.398 14:51:07 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:47.398 14:51:07 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:47.398 
14:51:07 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:47.398 14:51:07 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:47.398 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:47.398 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:47.398 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:47.398 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:47.398 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:47.398 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:47.398 rmmod nvme_tcp 00:23:47.398 rmmod nvme_fabrics 00:23:47.657 rmmod nvme_keyring 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2422277 ']' 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2422277 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2422277 ']' 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2422277 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2422277 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2422277' 00:23:47.657 killing process with pid 2422277 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2422277 00:23:47.657 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2422277 00:23:47.915 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:47.915 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:47.915 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:47.915 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:47.915 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:47.915 14:51:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.915 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.915 14:51:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.816 14:51:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:49.816 00:23:49.816 real 0m38.224s 00:23:49.816 user 2m3.717s 00:23:49.816 sys 0m7.387s 00:23:49.816 14:51:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:49.816 14:51:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
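The teardown traced above amounts to deleting the subsystem, unloading the host-side NVMe/TCP modules and stopping the target started for this test. A hedged sketch of the same sequence, restricted to commands that appear in the trace (2422277 is the target pid from this run):

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp/nvme_fabrics/nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    kill 2422277                   # killprocess of the nvmf target
    ip -4 addr flush cvl_0_1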
00:23:49.816 ************************************ 00:23:49.816 END TEST nvmf_failover 00:23:49.816 ************************************ 00:23:49.816 14:51:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:49.816 14:51:10 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:49.816 14:51:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:49.816 14:51:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:49.816 14:51:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.075 ************************************ 00:23:50.075 START TEST nvmf_host_discovery 00:23:50.075 ************************************ 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:50.075 * Looking for test storage... 00:23:50.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:50.075 14:51:10 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.075 14:51:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.341 14:51:15 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:55.341 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.341 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:55.342 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:55.342 14:51:15 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:55.342 Found net devices under 0000:86:00.0: cvl_0_0 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:55.342 Found net devices under 0000:86:00.1: cvl_0_1 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.342 14:51:15 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:55.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:23:55.342 00:23:55.342 --- 10.0.0.2 ping statistics --- 00:23:55.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.342 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:23:55.342 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.439 ms 00:23:55.342 00:23:55.342 --- 10.0.0.1 ping statistics --- 00:23:55.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.342 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2431196 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
2431196 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2431196 ']' 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:55.601 14:51:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.601 [2024-07-25 14:51:15.710975] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:23:55.601 [2024-07-25 14:51:15.711021] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.601 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.601 [2024-07-25 14:51:15.768631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.601 [2024-07-25 14:51:15.847876] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.601 [2024-07-25 14:51:15.847911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.601 [2024-07-25 14:51:15.847918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.601 [2024-07-25 14:51:15.847924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.601 [2024-07-25 14:51:15.847929] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
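For the host discovery test the nvmf target runs inside the cvl_0_0_ns_spdk namespace set up just above, and the script waits on the default /var/tmp/spdk.sock RPC socket before continuing. A rough standalone equivalent, assuming a simple readiness poll via rpc_get_methods in place of the waitforlisten helper:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                                        # 2431196 in this run
    # assumption: poll until the target answers RPCs on the default socket
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done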
00:23:55.601 [2024-07-25 14:51:15.847946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.536 [2024-07-25 14:51:16.551647] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.536 [2024-07-25 14:51:16.563773] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.536 null0 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.536 null1 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2431315 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 2431315 /tmp/host.sock 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2431315 ']' 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:56.536 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.536 14:51:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.536 [2024-07-25 14:51:16.637421] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:23:56.536 [2024-07-25 14:51:16.637464] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431315 ] 00:23:56.536 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.536 [2024-07-25 14:51:16.690778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.536 [2024-07-25 14:51:16.768809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.469 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.470 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.728 [2024-07-25 14:51:17.795036] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:57.728 14:51:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.728 14:51:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:57.728 14:51:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:58.293 [2024-07-25 14:51:18.513008] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:58.293 [2024-07-25 14:51:18.513028] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:58.293 [2024-07-25 14:51:18.513041] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:58.550 [2024-07-25 14:51:18.603315] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:58.550 [2024-07-25 14:51:18.788104] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:23:58.550 [2024-07-25 14:51:18.788122] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:58.807 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:58.807 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:58.807 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:58.807 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:58.807 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:58.807 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:58.807 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.808 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:59.066 14:51:19 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:59.066 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.067 [2024-07-25 14:51:19.327200] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:59.067 [2024-07-25 14:51:19.328355] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:59.067 [2024-07-25 14:51:19.328376] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:59.067 14:51:19 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.325 [2024-07-25 14:51:19.415629] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:59.325 14:51:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:59.325 [2024-07-25 14:51:19.601798] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:59.325 [2024-07-25 14:51:19.601815] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:59.325 [2024-07-25 14:51:19.601820] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.258 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.516 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.517 [2024-07-25 14:51:20.595453] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:00.517 [2024-07-25 14:51:20.595492] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:00.517 [2024-07-25 14:51:20.600127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.517 [2024-07-25 14:51:20.600145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.517 [2024-07-25 14:51:20.600154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.517 [2024-07-25 14:51:20.600161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.517 [2024-07-25 14:51:20.600169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.517 [2024-07-25 14:51:20.600176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.517 [2024-07-25 14:51:20.600184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.517 [2024-07-25 14:51:20.600191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.517 [2024-07-25 14:51:20.600198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5f20 is same with the state(5) to be set 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:00.517 14:51:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:00.517 [2024-07-25 14:51:20.610143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea5f20 (9): Bad file descriptor 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.517 [2024-07-25 14:51:20.620179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:00.517 [2024-07-25 14:51:20.620587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.517 [2024-07-25 14:51:20.620602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea5f20 with addr=10.0.0.2, port=4420 00:24:00.517 [2024-07-25 14:51:20.620610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5f20 is same with the state(5) to be set 00:24:00.517 [2024-07-25 14:51:20.620622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea5f20 (9): Bad file descriptor 00:24:00.517 [2024-07-25 14:51:20.620639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:00.517 [2024-07-25 14:51:20.620646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:00.517 [2024-07-25 14:51:20.620654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:00.517 [2024-07-25 14:51:20.620664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
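
Note: the burst of "connect() failed, errno = 111" resets above is expected at this point in the test. host/discovery.sh@127 has just removed the 4420 listener on the target, so the host controller that discovery created keeps failing to reconnect to 10.0.0.2:4420 until the next discovery log page drops that path (visible further down as "4420 not found", "4421 found again"). A rough standalone equivalent of this step, using only RPCs and flags that appear in this trace (the rpc.py socket locations and addresses are taken from this log and are assumptions outside of it):

# Hypothetical manual reproduction of the listener removal seen above.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

# The host-side controller now has a dead 4420 path; its reset attempts fail with
# ECONNREFUSED (errno 111) until the discovery poller prunes the path and only 4421 remains.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid'    # expected to settle on: 4421
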
00:24:00.517 [2024-07-25 14:51:20.630237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:00.517 [2024-07-25 14:51:20.630711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.517 [2024-07-25 14:51:20.630723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea5f20 with addr=10.0.0.2, port=4420 00:24:00.517 [2024-07-25 14:51:20.630730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5f20 is same with the state(5) to be set 00:24:00.517 [2024-07-25 14:51:20.630746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea5f20 (9): Bad file descriptor 00:24:00.517 [2024-07-25 14:51:20.630762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:00.517 [2024-07-25 14:51:20.630768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:00.517 [2024-07-25 14:51:20.630774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:00.517 [2024-07-25 14:51:20.630784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.517 [2024-07-25 14:51:20.640285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:00.517 [2024-07-25 14:51:20.640686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.517 [2024-07-25 14:51:20.640698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea5f20 with addr=10.0.0.2, port=4420 00:24:00.517 [2024-07-25 14:51:20.640705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5f20 is same with the state(5) to be set 00:24:00.517 [2024-07-25 14:51:20.640715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea5f20 (9): Bad file descriptor 00:24:00.517 [2024-07-25 14:51:20.640727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:00.517 [2024-07-25 14:51:20.640733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:00.517 [2024-07-25 14:51:20.640739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:00.517 [2024-07-25 14:51:20.640748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
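
Note: every waitforcondition '[[ ... ]]' call in this trace expands to the same polling loop; the common/autotest_common.sh@912-@918 fragments (local cond, local max=10, (( max-- )), eval, return 0, sleep 1) let it be reconstructed. A minimal sketch, assuming the upstream helper matches these fragments (the final failure return is not visible in this log and is an assumption):

# Sketch of the retry helper driving the checks in this test, reconstructed from the
# autotest_common.sh@912-@918 xtrace lines above.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0    # e.g. cond='[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        fi
        sleep 1         # matches the "sleep 1" at @918 before the next attempt
    done
    return 1            # assumed: give up after ~10 seconds
}
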
00:24:00.517 [2024-07-25 14:51:20.650336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:00.517 [2024-07-25 14:51:20.650884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.517 [2024-07-25 14:51:20.650898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea5f20 with addr=10.0.0.2, port=4420 00:24:00.517 [2024-07-25 14:51:20.650905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5f20 is same with the state(5) to be set 00:24:00.517 [2024-07-25 14:51:20.650915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea5f20 (9): Bad file descriptor 00:24:00.517 [2024-07-25 14:51:20.650930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:00.517 [2024-07-25 14:51:20.650937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:00.517 [2024-07-25 14:51:20.650943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:00.517 [2024-07-25 14:51:20.650952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:00.517 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:00.517 [2024-07-25 14:51:20.660386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:00.517 [2024-07-25 14:51:20.660908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.517 [2024-07-25 14:51:20.660920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea5f20 with addr=10.0.0.2, port=4420 00:24:00.517 [2024-07-25 14:51:20.660927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5f20 is same with the state(5) to be set 00:24:00.517 [2024-07-25 14:51:20.660939] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea5f20 (9): Bad file descriptor 00:24:00.517 [2024-07-25 14:51:20.660948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:00.517 [2024-07-25 14:51:20.660958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:00.517 [2024-07-25 14:51:20.660964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:00.517 [2024-07-25 14:51:20.660980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.517 [2024-07-25 14:51:20.670439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:00.517 [2024-07-25 14:51:20.670857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.518 [2024-07-25 14:51:20.670869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea5f20 with addr=10.0.0.2, port=4420 00:24:00.518 [2024-07-25 14:51:20.670876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5f20 is same with the state(5) to be set 00:24:00.518 [2024-07-25 14:51:20.670887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea5f20 (9): Bad file descriptor 00:24:00.518 [2024-07-25 14:51:20.670896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:00.518 [2024-07-25 14:51:20.670902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:00.518 [2024-07-25 14:51:20.670909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:00.518 [2024-07-25 14:51:20.670918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.518 [2024-07-25 14:51:20.680490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:00.518 [2024-07-25 14:51:20.680880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.518 [2024-07-25 14:51:20.680891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea5f20 with addr=10.0.0.2, port=4420 00:24:00.518 [2024-07-25 14:51:20.680898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5f20 is same with the state(5) to be set 00:24:00.518 [2024-07-25 14:51:20.680908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea5f20 (9): Bad file descriptor 00:24:00.518 [2024-07-25 14:51:20.680917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:00.518 [2024-07-25 14:51:20.680923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:00.518 [2024-07-25 14:51:20.680929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:00.518 [2024-07-25 14:51:20.680938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
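
Note: the repeated host/discovery.sh@55/@59/@63 blocks above are three small RPC-plus-jq helpers polled through waitforcondition; the check that follows expects get_subsystem_paths to collapse to "4421" once the 4420 path is gone. Sketches reconstructed only from what the trace shows (minor details such as quoting in the upstream script are assumptions):

# Helpers from host/discovery.sh as reconstructed from the @59/@55/@63 xtrace lines.
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # lists the trsvcid (port) of every path of one controller, e.g. "4420 4421"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
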
00:24:00.518 [2024-07-25 14:51:20.685064] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:00.518 [2024-07-25 14:51:20.685081] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.518 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.776 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.776 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:00.776 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:00.776 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:00.777 
14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.777 14:51:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.150 [2024-07-25 14:51:22.022101] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:02.150 [2024-07-25 14:51:22.022119] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:02.150 [2024-07-25 14:51:22.022132] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:02.150 [2024-07-25 14:51:22.111395] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:02.150 [2024-07-25 14:51:22.180249] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:02.150 [2024-07-25 14:51:22.180276] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:02.150 request: 00:24:02.150 { 00:24:02.150 "name": "nvme", 00:24:02.150 "trtype": "tcp", 00:24:02.150 "traddr": "10.0.0.2", 00:24:02.150 "adrfam": "ipv4", 00:24:02.150 "trsvcid": "8009", 00:24:02.150 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:02.150 "wait_for_attach": true, 00:24:02.150 "method": "bdev_nvme_start_discovery", 00:24:02.150 "req_id": 1 00:24:02.150 } 00:24:02.150 Got JSON-RPC error response 00:24:02.150 response: 00:24:02.150 { 00:24:02.150 "code": -17, 00:24:02.150 "message": "File exists" 00:24:02.150 } 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.150 request: 00:24:02.150 { 00:24:02.150 "name": "nvme_second", 00:24:02.150 "trtype": "tcp", 00:24:02.150 "traddr": "10.0.0.2", 00:24:02.150 "adrfam": "ipv4", 00:24:02.150 "trsvcid": "8009", 00:24:02.150 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:02.150 "wait_for_attach": true, 00:24:02.150 "method": "bdev_nvme_start_discovery", 00:24:02.150 "req_id": 1 00:24:02.150 } 00:24:02.150 Got JSON-RPC error response 00:24:02.150 response: 00:24:02.150 { 00:24:02.150 "code": -17, 00:24:02.150 "message": "File exists" 00:24:02.150 } 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:02.150 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.151 14:51:22 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.151 14:51:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.522 [2024-07-25 14:51:23.429357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.522 [2024-07-25 14:51:23.429384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eeef00 with addr=10.0.0.2, port=8010 00:24:03.522 [2024-07-25 14:51:23.429396] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:03.522 [2024-07-25 14:51:23.429402] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:03.522 [2024-07-25 14:51:23.429408] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:04.454 [2024-07-25 14:51:24.431808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.454 [2024-07-25 14:51:24.431832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eeef00 with addr=10.0.0.2, port=8010 00:24:04.454 [2024-07-25 14:51:24.431842] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:04.454 [2024-07-25 14:51:24.431848] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:04.454 [2024-07-25 14:51:24.431854] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:05.445 [2024-07-25 14:51:25.433681] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:05.445 request: 00:24:05.445 { 00:24:05.445 "name": "nvme_second", 00:24:05.445 "trtype": "tcp", 00:24:05.445 "traddr": "10.0.0.2", 00:24:05.445 "adrfam": "ipv4", 00:24:05.445 "trsvcid": "8010", 00:24:05.445 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:05.445 "wait_for_attach": false, 00:24:05.445 "attach_timeout_ms": 3000, 00:24:05.445 "method": "bdev_nvme_start_discovery", 00:24:05.445 "req_id": 1 00:24:05.445 } 00:24:05.445 Got JSON-RPC error response 00:24:05.445 response: 00:24:05.445 { 00:24:05.445 "code": -110, 
00:24:05.445 "message": "Connection timed out" 00:24:05.445 } 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2431315 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:05.445 rmmod nvme_tcp 00:24:05.445 rmmod nvme_fabrics 00:24:05.445 rmmod nvme_keyring 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2431196 ']' 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2431196 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2431196 ']' 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2431196 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2431196 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
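
Note: the failed bdev_nvme_start_discovery calls above are the negative half of this test. Re-registering an already-existing discovery name is rejected immediately with JSON-RPC error -17 ("File exists"), while pointing nvme_second at port 8010, where nothing is listening, gives up after the 3000 ms attach timeout with -110 ("Connection timed out"). A rough manual reproduction with the same flags used by the rpc_cmd wrapper in this trace (socket path and addresses come from this log and are assumptions outside of it):

# Hypothetical reproduction of the duplicate-name case: "nvme" is already registered.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w
# expected: request error -17, "File exists"

# Hypothetical reproduction of the timeout case: no discovery service on port 8010.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -T 3000
# expected after ~3 s: request error -110, "Connection timed out"
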
00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2431196' 00:24:05.445 killing process with pid 2431196 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2431196 00:24:05.445 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2431196 00:24:05.704 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:05.704 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:05.704 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:05.704 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:05.704 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:05.704 14:51:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.704 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.704 14:51:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.604 14:51:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:07.604 00:24:07.604 real 0m17.721s 00:24:07.604 user 0m22.241s 00:24:07.604 sys 0m5.485s 00:24:07.604 14:51:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:07.604 14:51:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.604 ************************************ 00:24:07.604 END TEST nvmf_host_discovery 00:24:07.604 ************************************ 00:24:07.604 14:51:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:07.604 14:51:27 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:07.604 14:51:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:07.604 14:51:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:07.604 14:51:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:07.863 ************************************ 00:24:07.863 START TEST nvmf_host_multipath_status 00:24:07.863 ************************************ 00:24:07.863 14:51:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:07.863 * Looking for test storage... 
00:24:07.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:07.863 14:51:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.863 14:51:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:07.863 14:51:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.863 14:51:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.863 14:51:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.863 14:51:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.863 14:51:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.863 14:51:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.863 14:51:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.863 14:51:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.863 14:51:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.863 14:51:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:07.863 14:51:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:07.863 14:51:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:13.126 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:13.126 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
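The loop above maps each detected E810 function to its kernel net devices through sysfs and collects them into net_devs. A minimal sketch of that lookup, hard-coding the two 0000:86:00.x addresses found in this run:

  #!/usr/bin/env bash
  # Sketch only: map NVMf-capable PCI functions to their net devices via sysfs,
  # mirroring what gather_supported_nvmf_pci_devs does in nvmf/common.sh.
  pci_devs=(0000:86:00.0 0000:86:00.1)   # E810 (0x8086:0x159b) functions from this run
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # keep just the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done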
00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:13.126 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:13.127 Found net devices under 0000:86:00.0: cvl_0_0 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:13.127 Found net devices under 0000:86:00.1: cvl_0_1 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:13.127 14:51:32 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:13.127 14:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:13.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:13.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:24:13.127 00:24:13.127 --- 10.0.0.2 ping statistics --- 00:24:13.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.127 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:13.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.403 ms 00:24:13.127 00:24:13.127 --- 10.0.0.1 ping statistics --- 00:24:13.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.127 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2436292 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2436292 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2436292 ']' 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:13.127 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:13.127 [2024-07-25 14:51:33.144978] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
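The plumbing above reduces to a small network-namespace loopback: the target-side port cvl_0_0 is moved into its own namespace and addressed as 10.0.0.2, while the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, and the target is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3, as traced here). A condensed sketch of the nvmf_tcp_init steps, using the interface names from this run:

  #!/usr/bin/env bash
  # Sketch only: the netns topology built by nvmf_tcp_init in nvmf/common.sh.
  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                 # target-side E810 port
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                            # root namespace -> target namespace
  ip netns exec $NS ping -c 1 10.0.0.1          # and back

With both pings answering, the subsequent RPCs (nvmf_create_transport -t tcp -o -u 8192, bdev_malloc_create 64 512 -b Malloc0, nvmf_create_subsystem, nvmf_subsystem_add_ns, and the two nvmf_subsystem_add_listener calls on ports 4420 and 4421) give bdevperf two TCP paths to the same namespace-local subsystem.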
00:24:13.127 [2024-07-25 14:51:33.145025] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.127 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.127 [2024-07-25 14:51:33.200957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:13.127 [2024-07-25 14:51:33.280483] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.127 [2024-07-25 14:51:33.280519] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.127 [2024-07-25 14:51:33.280526] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.127 [2024-07-25 14:51:33.280532] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.127 [2024-07-25 14:51:33.280537] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.127 [2024-07-25 14:51:33.280574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.127 [2024-07-25 14:51:33.280577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.690 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.690 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:13.690 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:13.690 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:13.690 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:13.690 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.690 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2436292 00:24:13.690 14:51:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:13.947 [2024-07-25 14:51:34.116804] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.947 14:51:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:14.204 Malloc0 00:24:14.204 14:51:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:14.204 14:51:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:14.462 14:51:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.719 [2024-07-25 14:51:34.814469] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.719 14:51:34 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:14.719 [2024-07-25 14:51:34.990935] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:14.719 14:51:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2436704 00:24:14.719 14:51:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:14.719 14:51:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:14.719 14:51:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2436704 /var/tmp/bdevperf.sock 00:24:14.719 14:51:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2436704 ']' 00:24:14.719 14:51:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.719 14:51:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:14.719 14:51:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.719 14:51:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:14.719 14:51:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:15.651 14:51:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:15.651 14:51:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:15.651 14:51:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:15.909 14:51:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:16.167 Nvme0n1 00:24:16.167 14:51:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:16.424 Nvme0n1 00:24:16.682 14:51:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:16.682 14:51:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:18.581 14:51:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:18.581 14:51:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:18.839 14:51:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:18.839 14:51:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:20.212 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:20.212 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:20.212 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.212 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:20.212 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.212 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:20.212 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.212 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:20.212 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:20.212 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:20.212 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.212 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:20.470 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.470 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:20.470 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.470 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:20.727 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.727 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:20.728 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.728 14:51:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:20.986 14:51:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.986 14:51:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:20.986 14:51:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.986 14:51:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:21.244 14:51:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.244 14:51:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:21.244 14:51:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:21.244 14:51:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:21.502 14:51:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:22.433 14:51:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:22.433 14:51:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:22.433 14:51:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:22.433 14:51:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.691 14:51:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.691 14:51:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:22.691 14:51:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.691 14:51:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:22.948 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.948 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:22.948 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.948 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:23.207 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.207 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:23.207 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.207 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:23.207 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.207 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:23.207 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.207 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:23.464 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.464 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:23.464 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.464 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:23.722 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.722 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:23.722 14:51:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:23.722 14:51:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:23.980 14:51:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:24.914 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:24.914 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:24.914 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.172 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:25.172 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.172 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:25.172 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.172 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:25.430 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.430 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:25.430 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.430 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:25.688 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.688 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:25.688 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.688 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:25.688 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.688 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:25.688 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.688 14:51:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:25.946 14:51:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.946 14:51:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:25.946 14:51:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.946 14:51:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:26.206 14:51:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.206 14:51:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:26.206 14:51:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:26.488 14:51:46 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:26.488 14:51:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:27.432 14:51:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:27.432 14:51:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:27.432 14:51:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:27.432 14:51:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.690 14:51:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.690 14:51:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:27.690 14:51:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.690 14:51:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:27.948 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.948 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:27.948 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.948 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:28.206 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.206 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:28.206 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:28.206 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.206 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.206 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:28.206 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.206 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:28.463 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:24:28.464 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:28.464 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:28.464 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.721 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:28.721 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:28.721 14:51:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:28.978 14:51:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:28.978 14:51:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:29.910 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:29.910 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:29.910 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.910 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:30.167 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:30.167 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:30.167 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.167 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:30.426 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:30.426 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:30.426 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.426 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:30.684 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.684 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:24:30.684 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.684 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:30.684 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.684 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:30.684 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:30.684 14:51:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.941 14:51:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:30.941 14:51:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:30.941 14:51:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.941 14:51:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:31.199 14:51:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:31.199 14:51:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:31.199 14:51:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:31.199 14:51:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:31.456 14:51:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:32.389 14:51:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:32.389 14:51:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:32.389 14:51:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.389 14:51:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:32.647 14:51:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:32.647 14:51:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:32.647 14:51:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.647 14:51:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:32.905 14:51:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.905 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:32.905 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.905 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:32.905 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.905 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:32.905 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.905 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:33.168 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.168 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:33.168 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.168 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:33.429 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:33.429 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:33.429 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.429 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:33.686 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.686 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:33.686 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:33.686 14:51:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:24:33.943 14:51:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:34.201 14:51:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:35.133 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:35.133 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:35.133 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.133 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:35.391 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.391 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:35.391 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.391 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:35.649 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.649 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:35.649 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:35.649 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.649 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.649 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:35.649 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.649 14:51:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:35.906 14:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.906 14:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:35.906 14:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:35.906 14:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.164 14:51:56 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.164 14:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:36.164 14:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.164 14:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:36.421 14:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.421 14:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:36.421 14:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:36.421 14:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:36.679 14:51:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:37.620 14:51:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:37.620 14:51:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:37.620 14:51:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.620 14:51:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:37.883 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.883 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:37.883 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.883 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:38.141 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.141 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:38.141 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.141 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:38.141 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.141 14:51:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:38.141 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.141 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:38.399 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.399 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:38.399 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:38.399 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.656 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.656 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:38.656 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.656 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:38.914 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.914 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:38.914 14:51:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:38.914 14:51:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:39.172 14:51:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:40.105 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:40.105 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:40.105 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.105 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:40.363 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.363 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:40.363 14:52:00 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.363 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:40.621 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.621 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:40.621 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:40.621 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.879 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.879 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:40.879 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.879 14:52:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:40.879 14:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.879 14:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:40.879 14:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.879 14:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:41.137 14:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.137 14:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:41.137 14:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.137 14:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:41.395 14:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.395 14:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:41.395 14:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:41.652 14:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:41.652 14:52:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:43.024 14:52:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:43.024 14:52:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:43.024 14:52:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.024 14:52:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:43.024 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.024 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:43.024 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.024 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:43.024 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.024 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:43.024 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.024 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:43.284 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.284 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:43.284 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.284 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:43.541 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.541 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:43.541 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.541 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:43.541 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.541 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:43.541 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.541 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:43.843 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.843 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2436704 00:24:43.843 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2436704 ']' 00:24:43.843 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2436704 00:24:43.843 14:52:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:43.843 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:43.843 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2436704 00:24:43.843 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:43.843 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:43.843 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2436704' 00:24:43.843 killing process with pid 2436704 00:24:43.843 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2436704 00:24:43.843 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2436704 00:24:44.128 Connection closed with partial response: 00:24:44.128 00:24:44.128 00:24:44.128 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2436704 00:24:44.128 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:44.128 [2024-07-25 14:51:35.052628] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:24:44.128 [2024-07-25 14:51:35.052683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2436704 ] 00:24:44.128 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.128 [2024-07-25 14:51:35.103810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.128 [2024-07-25 14:51:35.177758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.128 Running I/O for 90 seconds... 
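The xtrace above shows the pattern the multipath status checks follow: after each nvmf_subsystem_listener_set_ana_state change the script sleeps one second, then queries bdevperf over its RPC socket with bdev_nvme_get_io_paths and filters the JSON with jq per listener port and attribute (current/connected/accessible). A minimal sketch of that helper, reconstructed from the trace at multipath_status.sh@64 (function body and variable names are assumptions, not the actual script source):

    # Sketch reconstructed from the xtrace; argument names are assumed.
    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        # Ask bdevperf (via its RPC socket) for its current I/O path view and
        # pull out one attribute for the path whose listener port matches.
        actual=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        # Return success only if the observed value matches the expectation,
        # mirroring the [[ true == \t\r\u\e ]] comparisons in the trace.
        [[ "$actual" == "$expected" ]]
    }

    # Example matching the trace: verify port 4421 is reported as not
    # accessible after its listener was set to the inaccessible ANA state.
    port_status 4421 accessible false

The per-I/O completions dumped from try.txt below all carry ASYMMETRIC ACCESS INACCESSIBLE (03/02), consistent with bdevperf issuing I/O against paths whose listeners were in the inaccessible ANA state during the run.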
00:24:44.128 [2024-07-25 14:51:48.984535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.128 [2024-07-25 14:51:48.984573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.128 [2024-07-25 14:51:48.984617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.128 [2024-07-25 14:51:48.984638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.128 [2024-07-25 14:51:48.984657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.128 [2024-07-25 14:51:48.984678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.128 [2024-07-25 14:51:48.984700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.128 [2024-07-25 14:51:48.984721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.128 [2024-07-25 14:51:48.984740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.984759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.984778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.984803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.984822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.984842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.984861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.984880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.984899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.984919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.984940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.984959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.984978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.984990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.984997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.985010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.985017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.985029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.985038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.985057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.985064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.985077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.985084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.985097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.985103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.985116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.985122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.985135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.128 [2024-07-25 14:51:48.985142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:44.128 [2024-07-25 14:51:48.985155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.129 [2024-07-25 14:51:48.985162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.985174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:44.129 [2024-07-25 14:51:48.985180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.985193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.129 [2024-07-25 14:51:48.985199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.985212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.129 [2024-07-25 14:51:48.985219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.985965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.985976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.985993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:122 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:24:44.129 [2024-07-25 14:51:48.986617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:44.129 [2024-07-25 14:51:48.986707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.129 [2024-07-25 14:51:48.986713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.986729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.986735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.986753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.986759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.986775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.986782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.986797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.986804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.986819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.986826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.986841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.986848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.986863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.986869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.986885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.986896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.986913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.986920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.986935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.986942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.986958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.986965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.987058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.987083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.987108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.987132] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.987157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.130 [2024-07-25 14:51:48.987183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.130 [2024-07-25 14:51:48.987208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.130 [2024-07-25 14:51:48.987233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.130 [2024-07-25 14:51:48.987257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.130 [2024-07-25 14:51:48.987283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.130 [2024-07-25 14:51:48.987308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.130 [2024-07-25 14:51:48.987333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.130 [2024-07-25 14:51:48.987356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 
14:51:48.987381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.987405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.987429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.987453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.987477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.987502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.987526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.130 [2024-07-25 14:51:48.987550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.130 [2024-07-25 14:51:48.987577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.130 [2024-07-25 14:51:48.987601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66536 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:44.130 [2024-07-25 14:51:48.987625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:44.130 [2024-07-25 14:51:48.987642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.130 [2024-07-25 14:51:48.987649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.987667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.131 [2024-07-25 14:51:48.987674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.987691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.131 [2024-07-25 14:51:48.987698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.987716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.131 [2024-07-25 14:51:48.987722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.987740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.131 [2024-07-25 14:51:48.987747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.987764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.131 [2024-07-25 14:51:48.987771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.987789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.131 [2024-07-25 14:51:48.987795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.987813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.131 [2024-07-25 14:51:48.987821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.987838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.131 [2024-07-25 14:51:48.987845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.987862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:60 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.131 [2024-07-25 14:51:48.987870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.987888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.131 [2024-07-25 14:51:48.987895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.987912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.131 [2024-07-25 14:51:48.987919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.987936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.131 [2024-07-25 14:51:48.987952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.987970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:51:48.987977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.987995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:51:48.988002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.988019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:51:48.988026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.988048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:51:48.988055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.988073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:51:48.988080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.988097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:51:48.988104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 
14:51:48.988121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:51:48.988128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:51:48.988146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:51:48.988153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.866714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:52:01.866759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.866807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:52:01.866816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.866829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:52:01.866836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.866849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:52:01.866855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.866868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:52:01.866875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.866887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:52:01.866894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.866906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:52:01.866913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.866925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:52:01.866932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.866944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:52:01.866951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.866964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.131 [2024-07-25 14:52:01.866971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.866983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.131 [2024-07-25 14:52:01.866991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.867003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:52:01.867010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.867023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:52:01.867030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.867051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:52:01.867058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:44.131 [2024-07-25 14:52:01.867071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.131 [2024-07-25 14:52:01.867077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867135] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:127168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.132 [2024-07-25 14:52:01.867311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.132 [2024-07-25 
14:52:01.867329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.132 [2024-07-25 14:52:01.867348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.132 [2024-07-25 14:52:01.867367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.132 [2024-07-25 14:52:01.867385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.132 [2024-07-25 14:52:01.867404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.132 [2024-07-25 14:52:01.867423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.132 [2024-07-25 14:52:01.867444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.132 [2024-07-25 14:52:01.867463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127240 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.867535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.867541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.868095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.868113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.868129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.868136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.868149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.868156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.868169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.868175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.868188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.868195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.868207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.868214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:44.132 [2024-07-25 14:52:01.868226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.132 [2024-07-25 14:52:01.868233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.133 [2024-07-25 14:52:01.868253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.133 [2024-07-25 14:52:01.868272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.133 [2024-07-25 14:52:01.868291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.133 [2024-07-25 14:52:01.868314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.133 [2024-07-25 14:52:01.868334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.133 [2024-07-25 14:52:01.868353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.133 [2024-07-25 14:52:01.868372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.133 [2024-07-25 14:52:01.868391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.133 [2024-07-25 14:52:01.868410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.868429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.868448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:24:44.133 [2024-07-25 14:52:01.868460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.868467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.868487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.868505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.868525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.868543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.133 [2024-07-25 14:52:01.868564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.868577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:44.133 [2024-07-25 14:52:01.868584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.869270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.869286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.869302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.869309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.869322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.869329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.869342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.869349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.869361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.869368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.869381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.869389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.869401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.869408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.133 [2024-07-25 14:52:01.869420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.133 [2024-07-25 14:52:01.869427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:44.133 Received shutdown signal, test time was about 27.171277 seconds 00:24:44.133 00:24:44.133 Latency(us) 00:24:44.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.133 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:44.133 Verification LBA range: start 0x0 length 0x4000 00:24:44.133 Nvme0n1 : 27.17 10583.76 41.34 0.00 0.00 12071.14 463.03 3019898.88 00:24:44.133 =================================================================================================================== 00:24:44.133 Total : 10583.76 41.34 0.00 0.00 12071.14 463.03 3019898.88 00:24:44.133 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:44.133 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:44.133 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:44.133 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:44.133 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:44.133 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:44.391 14:52:04 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:44.391 rmmod nvme_tcp 00:24:44.391 rmmod nvme_fabrics 00:24:44.391 rmmod nvme_keyring 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2436292 ']' 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2436292 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2436292 ']' 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2436292 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2436292 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2436292' 00:24:44.391 killing process with pid 2436292 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2436292 00:24:44.391 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2436292 00:24:44.649 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:44.649 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:44.649 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:44.649 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:44.649 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:44.649 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.649 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.649 14:52:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.548 14:52:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:46.548 00:24:46.548 real 0m38.873s 00:24:46.548 user 1m45.922s 00:24:46.548 sys 0m10.381s 00:24:46.548 14:52:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:46.548 14:52:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:46.548 ************************************ 00:24:46.548 END TEST nvmf_host_multipath_status 00:24:46.548 ************************************ 00:24:46.548 14:52:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:24:46.548 14:52:06 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:46.548 14:52:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:46.548 14:52:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:46.548 14:52:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:46.805 ************************************ 00:24:46.805 START TEST nvmf_discovery_remove_ifc 00:24:46.805 ************************************ 00:24:46.805 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:46.805 * Looking for test storage... 00:24:46.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:46.805 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.805 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:46.805 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.805 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.805 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.805 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.805 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.805 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.805 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.805 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:46.806 14:52:06 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:46.806 14:52:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 
-- # mlx=() 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:52.067 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:52.067 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.067 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:52.068 Found net devices under 0000:86:00.0: cvl_0_0 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:52.068 Found net devices under 0000:86:00.1: cvl_0_1 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:52.068 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:52.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:24:52.327 00:24:52.327 --- 10.0.0.2 ping statistics --- 00:24:52.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.327 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:52.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.460 ms 00:24:52.327 00:24:52.327 --- 10.0.0.1 ping statistics --- 00:24:52.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.327 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2445068 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2445068 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2445068 ']' 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.327 14:52:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.327 [2024-07-25 14:52:12.556381] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:24:52.327 [2024-07-25 14:52:12.556429] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.327 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.327 [2024-07-25 14:52:12.615743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.586 [2024-07-25 14:52:12.695212] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.586 [2024-07-25 14:52:12.695246] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.586 [2024-07-25 14:52:12.695253] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.586 [2024-07-25 14:52:12.695259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.586 [2024-07-25 14:52:12.695264] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.586 [2024-07-25 14:52:12.695284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.152 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.152 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:53.152 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:53.152 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:53.152 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.152 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.152 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:53.152 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.152 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.152 [2024-07-25 14:52:13.413095] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.152 [2024-07-25 14:52:13.421227] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:53.152 null0 00:24:53.410 [2024-07-25 14:52:13.453226] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.410 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.410 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2445313 00:24:53.410 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:53.410 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2445313 /tmp/host.sock 00:24:53.410 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2445313 ']' 00:24:53.410 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:53.410 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:53.410 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:53.410 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:53.410 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.410 14:52:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.410 [2024-07-25 14:52:13.519942] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:24:53.410 [2024-07-25 14:52:13.519982] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2445313 ] 00:24:53.410 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.410 [2024-07-25 14:52:13.571934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.410 [2024-07-25 14:52:13.651550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.344 14:52:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.344 14:52:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:54.344 14:52:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:54.344 14:52:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:54.344 14:52:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.344 14:52:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.344 14:52:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.345 14:52:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:54.345 14:52:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.345 14:52:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.345 14:52:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.345 14:52:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:54.345 14:52:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.345 14:52:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.277 [2024-07-25 14:52:15.483160] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:55.277 [2024-07-25 14:52:15.483180] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:55.277 [2024-07-25 14:52:15.483196] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:55.536 [2024-07-25 14:52:15.611583] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:55.536 [2024-07-25 14:52:15.677024] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:55.536 [2024-07-25 14:52:15.677077] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:55.536 [2024-07-25 14:52:15.677097] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:55.536 [2024-07-25 14:52:15.677110] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:55.536 [2024-07-25 14:52:15.677128] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:55.536 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.536 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:55.536 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:55.536 [2024-07-25 14:52:15.682085] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22b4e60 was disconnected and freed. delete nvme_qpair. 00:24:55.536 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.536 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:55.536 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.536 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.536 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:55.536 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:55.536 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.536 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:55.536 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:55.536 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:55.795 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:55.795 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:55.795 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.795 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:55.795 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:55.795 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.795 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:55.795 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.795 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.795 14:52:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:55.795 14:52:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:56.733 14:52:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:56.733 14:52:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.733 14:52:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.733 14:52:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.733 14:52:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.733 14:52:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.733 14:52:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.733 14:52:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.733 14:52:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:56.733 14:52:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:57.672 14:52:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:57.672 14:52:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.672 14:52:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:57.672 14:52:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.672 14:52:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:57.672 14:52:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:57.672 14:52:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:57.932 14:52:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.932 14:52:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:57.932 14:52:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:58.871 14:52:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:58.871 14:52:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.871 14:52:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:58.871 14:52:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:58.871 14:52:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.871 14:52:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:58.871 14:52:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:58.871 14:52:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.871 14:52:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:58.871 14:52:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:59.809 14:52:20 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:59.809 14:52:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.809 14:52:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:59.809 14:52:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.809 14:52:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:59.809 14:52:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.809 14:52:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:59.809 14:52:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.068 14:52:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:00.068 14:52:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:01.006 14:52:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:01.006 14:52:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.006 14:52:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:01.006 14:52:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:01.006 14:52:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.006 14:52:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:01.006 14:52:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:01.006 [2024-07-25 14:52:21.118140] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:01.006 [2024-07-25 14:52:21.118180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.006 [2024-07-25 14:52:21.118191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.006 [2024-07-25 14:52:21.118200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.006 [2024-07-25 14:52:21.118207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.006 [2024-07-25 14:52:21.118214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.006 [2024-07-25 14:52:21.118221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.006 [2024-07-25 14:52:21.118228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.006 [2024-07-25 14:52:21.118235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.006 [2024-07-25 14:52:21.118242] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.006 [2024-07-25 14:52:21.118249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.006 [2024-07-25 14:52:21.118255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227b6a0 is same with the state(5) to be set 00:25:01.006 [2024-07-25 14:52:21.128155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227b6a0 (9): Bad file descriptor 00:25:01.006 14:52:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.006 [2024-07-25 14:52:21.138196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:01.006 14:52:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:01.006 14:52:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:01.946 [2024-07-25 14:52:22.146073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:01.946 [2024-07-25 14:52:22.146125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227b6a0 with addr=10.0.0.2, port=4420 00:25:01.946 [2024-07-25 14:52:22.146142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227b6a0 is same with the state(5) to be set 00:25:01.946 [2024-07-25 14:52:22.146202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227b6a0 (9): Bad file descriptor 00:25:01.946 [2024-07-25 14:52:22.146243] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:01.946 [2024-07-25 14:52:22.146261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:01.946 [2024-07-25 14:52:22.146270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:01.946 [2024-07-25 14:52:22.146281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:01.946 [2024-07-25 14:52:22.146302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
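[editor's note] The rapid abort/reconnect churn logged above is driven by the discovery session the test opened earlier on /tmp/host.sock with deliberately short timeouts (--ctrlr-loss-timeout-sec 2, --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 1), so the controller gives up within seconds once cvl_0_0 is pulled. A rough standalone equivalent of that bring-up, assuming an SPDK checkout as the working directory and that rpc_cmd in the harness forwards to scripts/rpc.py, looks like this (flags copied from the trace):

  # Host-side bring-up, condensed from the trace (the harness waits for the RPC socket
  # with waitforlisten before issuing any RPCs; a sleep stands in for that here).
  HOST_SOCK=/tmp/host.sock
  ./build/bin/nvmf_tgt -m 0x1 -r "$HOST_SOCK" --wait-for-rpc -L bdev_nvme &
  sleep 2
  ./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_set_options -e 1
  ./scripts/rpc.py -s "$HOST_SOCK" framework_start_init
  ./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach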
00:25:01.946 [2024-07-25 14:52:22.146312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:01.946 14:52:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:01.946 14:52:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.946 14:52:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:01.946 14:52:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.946 14:52:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:01.946 14:52:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:01.946 14:52:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:01.946 14:52:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.946 14:52:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:01.946 14:52:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:02.885 [2024-07-25 14:52:23.148797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:02.885 [2024-07-25 14:52:23.148819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:02.885 [2024-07-25 14:52:23.148826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:02.885 [2024-07-25 14:52:23.148834] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:02.885 [2024-07-25 14:52:23.148845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
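[editor's note] The repeated get_bdev_list / "sleep 1" blocks in the trace are a simple poll: list bdev names over the host RPC socket and loop until the list matches what the test expects (nvme0n1 while attached, an empty string once the interface removal has taken effect). A minimal sketch of that helper pair, assuming the same /tmp/host.sock socket and rpc.py in place of the harness's rpc_cmd:

  # Poll the host app until the bdev list matches the expectation (sketch of wait_for_bdev).
  get_bdev_list() {
      ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }
  wait_for_bdev nvme0n1   # present once discovery has attached the subsystem
  wait_for_bdev ''        # gone after the interface is removed and the timeouts expire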
00:25:02.885 [2024-07-25 14:52:23.148863] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:02.885 [2024-07-25 14:52:23.148884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.885 [2024-07-25 14:52:23.148893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.885 [2024-07-25 14:52:23.148902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.885 [2024-07-25 14:52:23.148909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.885 [2024-07-25 14:52:23.148916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.885 [2024-07-25 14:52:23.148922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.885 [2024-07-25 14:52:23.148929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.885 [2024-07-25 14:52:23.148936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.885 [2024-07-25 14:52:23.148946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.885 [2024-07-25 14:52:23.148952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.885 [2024-07-25 14:52:23.148959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:25:02.885 [2024-07-25 14:52:23.149658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227aa80 (9): Bad file descriptor 00:25:02.885 [2024-07-25 14:52:23.150666] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:02.885 [2024-07-25 14:52:23.150676] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:03.145 14:52:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:04.525 14:52:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:04.525 14:52:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.525 14:52:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:04.525 14:52:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.525 14:52:24 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:25:04.525 14:52:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:04.525 14:52:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:04.525 14:52:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.525 14:52:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:04.525 14:52:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:05.119 [2024-07-25 14:52:25.208271] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:05.119 [2024-07-25 14:52:25.208288] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:05.119 [2024-07-25 14:52:25.208302] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:05.120 [2024-07-25 14:52:25.296561] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:05.378 14:52:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:05.378 14:52:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.378 14:52:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:05.378 14:52:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.378 14:52:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:05.378 14:52:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.378 14:52:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:05.378 14:52:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.378 14:52:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:05.378 14:52:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:05.378 [2024-07-25 14:52:25.522749] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:05.378 [2024-07-25 14:52:25.522784] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:05.378 [2024-07-25 14:52:25.522802] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:05.378 [2024-07-25 14:52:25.522816] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:05.378 [2024-07-25 14:52:25.522823] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:05.378 [2024-07-25 14:52:25.527588] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2291a70 was disconnected and freed. delete nvme_qpair. 
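[editor's note] The recovery half of the test, traced above, is just the inverse of the removal: put the address back on the target port inside the namespace, bring the link up, and wait for the discovery service to re-attach, which creates a fresh controller and therefore a new bdev name (nvme1n1 instead of nvme0n1). Condensed, using the polling helper sketched earlier:

  # Restore the target-side interface and wait for discovery to re-attach.
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1    # new controller => new namespace bdev, as logged above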
00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2445313 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2445313 ']' 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2445313 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2445313 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2445313' 00:25:06.315 killing process with pid 2445313 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2445313 00:25:06.315 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2445313 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:06.574 rmmod nvme_tcp 00:25:06.574 rmmod nvme_fabrics 00:25:06.574 rmmod nvme_keyring 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
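[editor's note] The host-side cleanup above follows the harness's killprocess pattern: confirm the pid is alive, check that it is an SPDK reactor rather than a sudo wrapper, then kill and reap it, after which the kernel NVMe/TCP initiator modules are unloaded. A simplified, Linux-only sketch of that pattern (the real helper has more platform handling):

  # Sketch of the killprocess helper seen in the trace.
  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                       # still running?
      [[ "$(ps --no-headers -o comm= "$pid")" == sudo ]] && return 1   # never kill the sudo parent
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                              # reaps only children of this shell, as in the harness
  }
  killprocess "$hostpid"
  modprobe -v -r nvme-tcp       # drops nvme_tcp / nvme_fabrics / nvme_keyring, as the rmmod lines show
  modprobe -v -r nvme-fabrics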
00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2445068 ']' 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2445068 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2445068 ']' 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2445068 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:06.574 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2445068 00:25:06.834 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:06.834 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:06.834 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2445068' 00:25:06.834 killing process with pid 2445068 00:25:06.834 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2445068 00:25:06.834 14:52:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2445068 00:25:06.834 14:52:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:06.834 14:52:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:06.834 14:52:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:06.834 14:52:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:06.834 14:52:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:06.834 14:52:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.834 14:52:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:06.834 14:52:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.370 14:52:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:09.370 00:25:09.370 real 0m22.276s 00:25:09.370 user 0m28.798s 00:25:09.370 sys 0m5.596s 00:25:09.370 14:52:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:09.370 14:52:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:09.371 ************************************ 00:25:09.371 END TEST nvmf_discovery_remove_ifc 00:25:09.371 ************************************ 00:25:09.371 14:52:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:09.371 14:52:29 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:09.371 14:52:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:09.371 14:52:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:09.371 14:52:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:09.371 ************************************ 00:25:09.371 START TEST nvmf_identify_kernel_target 00:25:09.371 ************************************ 
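[editor's note] Before the identify_kernel_target run begins, the previous test's nvmftestfini (traced just above the END TEST banner) tears down the target side as well. Condensed, and with the namespace deletion written out explicitly as an assumption about what _remove_spdk_ns amounts to (the trace redirects that helper's output away):

  # Target-side teardown, condensed from the trace above.
  killprocess "$nvmfpid"                                 # nvmf_tgt started at the top of the test
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # drop the initiator-side address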
00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:09.371 * Looking for test storage... 00:25:09.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:09.371 14:52:29 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:09.371 14:52:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:14.647 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:14.647 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:14.647 Found net devices under 0000:86:00.0: cvl_0_0 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.647 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:14.648 Found net devices under 0000:86:00.1: cvl_0_1 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:14.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:25:14.648 00:25:14.648 --- 10.0.0.2 ping statistics --- 00:25:14.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.648 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:25:14.648 00:25:14.648 --- 10.0.0.1 ping statistics --- 00:25:14.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.648 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:14.648 14:52:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:14.648 14:52:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:17.188 Waiting for block devices as requested 00:25:17.188 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:17.188 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:17.446 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:17.446 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:17.446 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:17.704 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:17.704 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:17.704 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:17.704 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:17.962 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:17.962 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:17.962 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:17.962 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:18.220 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:18.220 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:18.220 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:18.479 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:18.479 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:18.479 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:18.479 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:18.479 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:18.479 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:18.479 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:18.479 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:18.479 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:18.479 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:18.480 No valid GPT data, bailing 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:18.480 00:25:18.480 Discovery Log Number of Records 2, Generation counter 2 00:25:18.480 =====Discovery Log Entry 0====== 00:25:18.480 trtype: tcp 00:25:18.480 adrfam: ipv4 00:25:18.480 subtype: current discovery subsystem 00:25:18.480 treq: not specified, sq flow control disable supported 00:25:18.480 portid: 1 00:25:18.480 trsvcid: 4420 00:25:18.480 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:18.480 traddr: 10.0.0.1 00:25:18.480 eflags: none 00:25:18.480 sectype: none 00:25:18.480 =====Discovery Log Entry 1====== 00:25:18.480 trtype: tcp 00:25:18.480 adrfam: ipv4 00:25:18.480 subtype: nvme subsystem 00:25:18.480 treq: not specified, sq flow control disable supported 00:25:18.480 portid: 1 00:25:18.480 trsvcid: 4420 00:25:18.480 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:18.480 traddr: 10.0.0.1 00:25:18.480 eflags: none 00:25:18.480 sectype: none 00:25:18.480 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:18.480 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:18.480 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.480 ===================================================== 00:25:18.480 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:18.480 ===================================================== 00:25:18.480 Controller Capabilities/Features 00:25:18.480 ================================ 00:25:18.480 Vendor ID: 0000 00:25:18.480 Subsystem Vendor ID: 0000 00:25:18.480 Serial Number: edeb873dd0304fced88e 00:25:18.480 Model Number: Linux 00:25:18.480 Firmware Version: 6.7.0-68 00:25:18.480 Recommended Arb Burst: 0 00:25:18.480 IEEE OUI Identifier: 00 00 00 00:25:18.480 Multi-path I/O 00:25:18.480 May have multiple subsystem ports: No 00:25:18.480 May have multiple 
controllers: No 00:25:18.480 Associated with SR-IOV VF: No 00:25:18.480 Max Data Transfer Size: Unlimited 00:25:18.480 Max Number of Namespaces: 0 00:25:18.480 Max Number of I/O Queues: 1024 00:25:18.480 NVMe Specification Version (VS): 1.3 00:25:18.480 NVMe Specification Version (Identify): 1.3 00:25:18.480 Maximum Queue Entries: 1024 00:25:18.480 Contiguous Queues Required: No 00:25:18.480 Arbitration Mechanisms Supported 00:25:18.480 Weighted Round Robin: Not Supported 00:25:18.480 Vendor Specific: Not Supported 00:25:18.480 Reset Timeout: 7500 ms 00:25:18.480 Doorbell Stride: 4 bytes 00:25:18.480 NVM Subsystem Reset: Not Supported 00:25:18.480 Command Sets Supported 00:25:18.480 NVM Command Set: Supported 00:25:18.480 Boot Partition: Not Supported 00:25:18.480 Memory Page Size Minimum: 4096 bytes 00:25:18.480 Memory Page Size Maximum: 4096 bytes 00:25:18.480 Persistent Memory Region: Not Supported 00:25:18.480 Optional Asynchronous Events Supported 00:25:18.480 Namespace Attribute Notices: Not Supported 00:25:18.480 Firmware Activation Notices: Not Supported 00:25:18.480 ANA Change Notices: Not Supported 00:25:18.480 PLE Aggregate Log Change Notices: Not Supported 00:25:18.480 LBA Status Info Alert Notices: Not Supported 00:25:18.480 EGE Aggregate Log Change Notices: Not Supported 00:25:18.480 Normal NVM Subsystem Shutdown event: Not Supported 00:25:18.480 Zone Descriptor Change Notices: Not Supported 00:25:18.480 Discovery Log Change Notices: Supported 00:25:18.480 Controller Attributes 00:25:18.480 128-bit Host Identifier: Not Supported 00:25:18.480 Non-Operational Permissive Mode: Not Supported 00:25:18.480 NVM Sets: Not Supported 00:25:18.480 Read Recovery Levels: Not Supported 00:25:18.480 Endurance Groups: Not Supported 00:25:18.480 Predictable Latency Mode: Not Supported 00:25:18.480 Traffic Based Keep ALive: Not Supported 00:25:18.480 Namespace Granularity: Not Supported 00:25:18.480 SQ Associations: Not Supported 00:25:18.480 UUID List: Not Supported 00:25:18.480 Multi-Domain Subsystem: Not Supported 00:25:18.480 Fixed Capacity Management: Not Supported 00:25:18.480 Variable Capacity Management: Not Supported 00:25:18.480 Delete Endurance Group: Not Supported 00:25:18.480 Delete NVM Set: Not Supported 00:25:18.480 Extended LBA Formats Supported: Not Supported 00:25:18.480 Flexible Data Placement Supported: Not Supported 00:25:18.480 00:25:18.480 Controller Memory Buffer Support 00:25:18.480 ================================ 00:25:18.480 Supported: No 00:25:18.480 00:25:18.480 Persistent Memory Region Support 00:25:18.480 ================================ 00:25:18.480 Supported: No 00:25:18.480 00:25:18.480 Admin Command Set Attributes 00:25:18.480 ============================ 00:25:18.480 Security Send/Receive: Not Supported 00:25:18.480 Format NVM: Not Supported 00:25:18.480 Firmware Activate/Download: Not Supported 00:25:18.480 Namespace Management: Not Supported 00:25:18.480 Device Self-Test: Not Supported 00:25:18.480 Directives: Not Supported 00:25:18.480 NVMe-MI: Not Supported 00:25:18.480 Virtualization Management: Not Supported 00:25:18.480 Doorbell Buffer Config: Not Supported 00:25:18.480 Get LBA Status Capability: Not Supported 00:25:18.480 Command & Feature Lockdown Capability: Not Supported 00:25:18.480 Abort Command Limit: 1 00:25:18.480 Async Event Request Limit: 1 00:25:18.480 Number of Firmware Slots: N/A 00:25:18.480 Firmware Slot 1 Read-Only: N/A 00:25:18.480 Firmware Activation Without Reset: N/A 00:25:18.480 Multiple Update Detection Support: N/A 
00:25:18.480 Firmware Update Granularity: No Information Provided 00:25:18.480 Per-Namespace SMART Log: No 00:25:18.480 Asymmetric Namespace Access Log Page: Not Supported 00:25:18.480 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:18.480 Command Effects Log Page: Not Supported 00:25:18.480 Get Log Page Extended Data: Supported 00:25:18.480 Telemetry Log Pages: Not Supported 00:25:18.480 Persistent Event Log Pages: Not Supported 00:25:18.480 Supported Log Pages Log Page: May Support 00:25:18.480 Commands Supported & Effects Log Page: Not Supported 00:25:18.480 Feature Identifiers & Effects Log Page:May Support 00:25:18.480 NVMe-MI Commands & Effects Log Page: May Support 00:25:18.480 Data Area 4 for Telemetry Log: Not Supported 00:25:18.480 Error Log Page Entries Supported: 1 00:25:18.480 Keep Alive: Not Supported 00:25:18.480 00:25:18.480 NVM Command Set Attributes 00:25:18.480 ========================== 00:25:18.480 Submission Queue Entry Size 00:25:18.480 Max: 1 00:25:18.480 Min: 1 00:25:18.480 Completion Queue Entry Size 00:25:18.480 Max: 1 00:25:18.480 Min: 1 00:25:18.480 Number of Namespaces: 0 00:25:18.480 Compare Command: Not Supported 00:25:18.480 Write Uncorrectable Command: Not Supported 00:25:18.480 Dataset Management Command: Not Supported 00:25:18.480 Write Zeroes Command: Not Supported 00:25:18.480 Set Features Save Field: Not Supported 00:25:18.481 Reservations: Not Supported 00:25:18.481 Timestamp: Not Supported 00:25:18.481 Copy: Not Supported 00:25:18.481 Volatile Write Cache: Not Present 00:25:18.481 Atomic Write Unit (Normal): 1 00:25:18.481 Atomic Write Unit (PFail): 1 00:25:18.481 Atomic Compare & Write Unit: 1 00:25:18.481 Fused Compare & Write: Not Supported 00:25:18.481 Scatter-Gather List 00:25:18.481 SGL Command Set: Supported 00:25:18.481 SGL Keyed: Not Supported 00:25:18.481 SGL Bit Bucket Descriptor: Not Supported 00:25:18.481 SGL Metadata Pointer: Not Supported 00:25:18.481 Oversized SGL: Not Supported 00:25:18.481 SGL Metadata Address: Not Supported 00:25:18.481 SGL Offset: Supported 00:25:18.481 Transport SGL Data Block: Not Supported 00:25:18.481 Replay Protected Memory Block: Not Supported 00:25:18.481 00:25:18.481 Firmware Slot Information 00:25:18.481 ========================= 00:25:18.481 Active slot: 0 00:25:18.481 00:25:18.481 00:25:18.481 Error Log 00:25:18.481 ========= 00:25:18.481 00:25:18.481 Active Namespaces 00:25:18.481 ================= 00:25:18.481 Discovery Log Page 00:25:18.481 ================== 00:25:18.481 Generation Counter: 2 00:25:18.481 Number of Records: 2 00:25:18.481 Record Format: 0 00:25:18.481 00:25:18.481 Discovery Log Entry 0 00:25:18.481 ---------------------- 00:25:18.481 Transport Type: 3 (TCP) 00:25:18.481 Address Family: 1 (IPv4) 00:25:18.481 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:18.481 Entry Flags: 00:25:18.481 Duplicate Returned Information: 0 00:25:18.481 Explicit Persistent Connection Support for Discovery: 0 00:25:18.481 Transport Requirements: 00:25:18.481 Secure Channel: Not Specified 00:25:18.481 Port ID: 1 (0x0001) 00:25:18.481 Controller ID: 65535 (0xffff) 00:25:18.481 Admin Max SQ Size: 32 00:25:18.481 Transport Service Identifier: 4420 00:25:18.481 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:18.481 Transport Address: 10.0.0.1 00:25:18.481 Discovery Log Entry 1 00:25:18.481 ---------------------- 00:25:18.481 Transport Type: 3 (TCP) 00:25:18.481 Address Family: 1 (IPv4) 00:25:18.481 Subsystem Type: 2 (NVM Subsystem) 00:25:18.481 Entry Flags: 
00:25:18.481 Duplicate Returned Information: 0 00:25:18.481 Explicit Persistent Connection Support for Discovery: 0 00:25:18.481 Transport Requirements: 00:25:18.481 Secure Channel: Not Specified 00:25:18.481 Port ID: 1 (0x0001) 00:25:18.481 Controller ID: 65535 (0xffff) 00:25:18.481 Admin Max SQ Size: 32 00:25:18.481 Transport Service Identifier: 4420 00:25:18.481 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:18.481 Transport Address: 10.0.0.1 00:25:18.481 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:18.481 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.481 get_feature(0x01) failed 00:25:18.481 get_feature(0x02) failed 00:25:18.481 get_feature(0x04) failed 00:25:18.481 ===================================================== 00:25:18.481 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:18.481 ===================================================== 00:25:18.481 Controller Capabilities/Features 00:25:18.481 ================================ 00:25:18.481 Vendor ID: 0000 00:25:18.481 Subsystem Vendor ID: 0000 00:25:18.481 Serial Number: 2ab5f1b92e8345bfca7f 00:25:18.481 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:18.481 Firmware Version: 6.7.0-68 00:25:18.481 Recommended Arb Burst: 6 00:25:18.481 IEEE OUI Identifier: 00 00 00 00:25:18.481 Multi-path I/O 00:25:18.481 May have multiple subsystem ports: Yes 00:25:18.481 May have multiple controllers: Yes 00:25:18.481 Associated with SR-IOV VF: No 00:25:18.481 Max Data Transfer Size: Unlimited 00:25:18.481 Max Number of Namespaces: 1024 00:25:18.481 Max Number of I/O Queues: 128 00:25:18.481 NVMe Specification Version (VS): 1.3 00:25:18.481 NVMe Specification Version (Identify): 1.3 00:25:18.481 Maximum Queue Entries: 1024 00:25:18.481 Contiguous Queues Required: No 00:25:18.481 Arbitration Mechanisms Supported 00:25:18.481 Weighted Round Robin: Not Supported 00:25:18.481 Vendor Specific: Not Supported 00:25:18.481 Reset Timeout: 7500 ms 00:25:18.481 Doorbell Stride: 4 bytes 00:25:18.481 NVM Subsystem Reset: Not Supported 00:25:18.481 Command Sets Supported 00:25:18.481 NVM Command Set: Supported 00:25:18.481 Boot Partition: Not Supported 00:25:18.481 Memory Page Size Minimum: 4096 bytes 00:25:18.481 Memory Page Size Maximum: 4096 bytes 00:25:18.481 Persistent Memory Region: Not Supported 00:25:18.481 Optional Asynchronous Events Supported 00:25:18.481 Namespace Attribute Notices: Supported 00:25:18.481 Firmware Activation Notices: Not Supported 00:25:18.481 ANA Change Notices: Supported 00:25:18.481 PLE Aggregate Log Change Notices: Not Supported 00:25:18.481 LBA Status Info Alert Notices: Not Supported 00:25:18.481 EGE Aggregate Log Change Notices: Not Supported 00:25:18.481 Normal NVM Subsystem Shutdown event: Not Supported 00:25:18.481 Zone Descriptor Change Notices: Not Supported 00:25:18.481 Discovery Log Change Notices: Not Supported 00:25:18.481 Controller Attributes 00:25:18.481 128-bit Host Identifier: Supported 00:25:18.481 Non-Operational Permissive Mode: Not Supported 00:25:18.481 NVM Sets: Not Supported 00:25:18.481 Read Recovery Levels: Not Supported 00:25:18.481 Endurance Groups: Not Supported 00:25:18.481 Predictable Latency Mode: Not Supported 00:25:18.481 Traffic Based Keep ALive: Supported 00:25:18.481 Namespace Granularity: Not Supported 
00:25:18.481 SQ Associations: Not Supported 00:25:18.481 UUID List: Not Supported 00:25:18.481 Multi-Domain Subsystem: Not Supported 00:25:18.481 Fixed Capacity Management: Not Supported 00:25:18.481 Variable Capacity Management: Not Supported 00:25:18.481 Delete Endurance Group: Not Supported 00:25:18.481 Delete NVM Set: Not Supported 00:25:18.481 Extended LBA Formats Supported: Not Supported 00:25:18.481 Flexible Data Placement Supported: Not Supported 00:25:18.481 00:25:18.481 Controller Memory Buffer Support 00:25:18.481 ================================ 00:25:18.481 Supported: No 00:25:18.481 00:25:18.481 Persistent Memory Region Support 00:25:18.481 ================================ 00:25:18.481 Supported: No 00:25:18.481 00:25:18.481 Admin Command Set Attributes 00:25:18.481 ============================ 00:25:18.481 Security Send/Receive: Not Supported 00:25:18.481 Format NVM: Not Supported 00:25:18.481 Firmware Activate/Download: Not Supported 00:25:18.481 Namespace Management: Not Supported 00:25:18.481 Device Self-Test: Not Supported 00:25:18.481 Directives: Not Supported 00:25:18.481 NVMe-MI: Not Supported 00:25:18.481 Virtualization Management: Not Supported 00:25:18.481 Doorbell Buffer Config: Not Supported 00:25:18.481 Get LBA Status Capability: Not Supported 00:25:18.481 Command & Feature Lockdown Capability: Not Supported 00:25:18.481 Abort Command Limit: 4 00:25:18.481 Async Event Request Limit: 4 00:25:18.481 Number of Firmware Slots: N/A 00:25:18.481 Firmware Slot 1 Read-Only: N/A 00:25:18.481 Firmware Activation Without Reset: N/A 00:25:18.481 Multiple Update Detection Support: N/A 00:25:18.481 Firmware Update Granularity: No Information Provided 00:25:18.481 Per-Namespace SMART Log: Yes 00:25:18.481 Asymmetric Namespace Access Log Page: Supported 00:25:18.481 ANA Transition Time : 10 sec 00:25:18.481 00:25:18.481 Asymmetric Namespace Access Capabilities 00:25:18.481 ANA Optimized State : Supported 00:25:18.481 ANA Non-Optimized State : Supported 00:25:18.481 ANA Inaccessible State : Supported 00:25:18.481 ANA Persistent Loss State : Supported 00:25:18.481 ANA Change State : Supported 00:25:18.481 ANAGRPID is not changed : No 00:25:18.481 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:18.481 00:25:18.481 ANA Group Identifier Maximum : 128 00:25:18.481 Number of ANA Group Identifiers : 128 00:25:18.481 Max Number of Allowed Namespaces : 1024 00:25:18.481 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:18.481 Command Effects Log Page: Supported 00:25:18.481 Get Log Page Extended Data: Supported 00:25:18.481 Telemetry Log Pages: Not Supported 00:25:18.481 Persistent Event Log Pages: Not Supported 00:25:18.481 Supported Log Pages Log Page: May Support 00:25:18.481 Commands Supported & Effects Log Page: Not Supported 00:25:18.481 Feature Identifiers & Effects Log Page:May Support 00:25:18.482 NVMe-MI Commands & Effects Log Page: May Support 00:25:18.482 Data Area 4 for Telemetry Log: Not Supported 00:25:18.482 Error Log Page Entries Supported: 128 00:25:18.482 Keep Alive: Supported 00:25:18.482 Keep Alive Granularity: 1000 ms 00:25:18.482 00:25:18.482 NVM Command Set Attributes 00:25:18.482 ========================== 00:25:18.482 Submission Queue Entry Size 00:25:18.482 Max: 64 00:25:18.482 Min: 64 00:25:18.482 Completion Queue Entry Size 00:25:18.482 Max: 16 00:25:18.482 Min: 16 00:25:18.482 Number of Namespaces: 1024 00:25:18.482 Compare Command: Not Supported 00:25:18.482 Write Uncorrectable Command: Not Supported 00:25:18.482 Dataset Management Command: Supported 
00:25:18.482 Write Zeroes Command: Supported 00:25:18.482 Set Features Save Field: Not Supported 00:25:18.482 Reservations: Not Supported 00:25:18.482 Timestamp: Not Supported 00:25:18.482 Copy: Not Supported 00:25:18.482 Volatile Write Cache: Present 00:25:18.482 Atomic Write Unit (Normal): 1 00:25:18.482 Atomic Write Unit (PFail): 1 00:25:18.482 Atomic Compare & Write Unit: 1 00:25:18.482 Fused Compare & Write: Not Supported 00:25:18.482 Scatter-Gather List 00:25:18.482 SGL Command Set: Supported 00:25:18.482 SGL Keyed: Not Supported 00:25:18.482 SGL Bit Bucket Descriptor: Not Supported 00:25:18.482 SGL Metadata Pointer: Not Supported 00:25:18.482 Oversized SGL: Not Supported 00:25:18.482 SGL Metadata Address: Not Supported 00:25:18.482 SGL Offset: Supported 00:25:18.482 Transport SGL Data Block: Not Supported 00:25:18.482 Replay Protected Memory Block: Not Supported 00:25:18.482 00:25:18.482 Firmware Slot Information 00:25:18.482 ========================= 00:25:18.482 Active slot: 0 00:25:18.482 00:25:18.482 Asymmetric Namespace Access 00:25:18.482 =========================== 00:25:18.482 Change Count : 0 00:25:18.482 Number of ANA Group Descriptors : 1 00:25:18.482 ANA Group Descriptor : 0 00:25:18.482 ANA Group ID : 1 00:25:18.482 Number of NSID Values : 1 00:25:18.482 Change Count : 0 00:25:18.482 ANA State : 1 00:25:18.482 Namespace Identifier : 1 00:25:18.482 00:25:18.482 Commands Supported and Effects 00:25:18.482 ============================== 00:25:18.482 Admin Commands 00:25:18.482 -------------- 00:25:18.482 Get Log Page (02h): Supported 00:25:18.482 Identify (06h): Supported 00:25:18.482 Abort (08h): Supported 00:25:18.482 Set Features (09h): Supported 00:25:18.482 Get Features (0Ah): Supported 00:25:18.482 Asynchronous Event Request (0Ch): Supported 00:25:18.482 Keep Alive (18h): Supported 00:25:18.482 I/O Commands 00:25:18.482 ------------ 00:25:18.482 Flush (00h): Supported 00:25:18.482 Write (01h): Supported LBA-Change 00:25:18.482 Read (02h): Supported 00:25:18.482 Write Zeroes (08h): Supported LBA-Change 00:25:18.482 Dataset Management (09h): Supported 00:25:18.482 00:25:18.482 Error Log 00:25:18.482 ========= 00:25:18.482 Entry: 0 00:25:18.482 Error Count: 0x3 00:25:18.482 Submission Queue Id: 0x0 00:25:18.482 Command Id: 0x5 00:25:18.482 Phase Bit: 0 00:25:18.482 Status Code: 0x2 00:25:18.482 Status Code Type: 0x0 00:25:18.482 Do Not Retry: 1 00:25:18.482 Error Location: 0x28 00:25:18.482 LBA: 0x0 00:25:18.482 Namespace: 0x0 00:25:18.482 Vendor Log Page: 0x0 00:25:18.482 ----------- 00:25:18.482 Entry: 1 00:25:18.482 Error Count: 0x2 00:25:18.482 Submission Queue Id: 0x0 00:25:18.482 Command Id: 0x5 00:25:18.482 Phase Bit: 0 00:25:18.482 Status Code: 0x2 00:25:18.482 Status Code Type: 0x0 00:25:18.482 Do Not Retry: 1 00:25:18.482 Error Location: 0x28 00:25:18.482 LBA: 0x0 00:25:18.482 Namespace: 0x0 00:25:18.482 Vendor Log Page: 0x0 00:25:18.482 ----------- 00:25:18.482 Entry: 2 00:25:18.482 Error Count: 0x1 00:25:18.482 Submission Queue Id: 0x0 00:25:18.482 Command Id: 0x4 00:25:18.482 Phase Bit: 0 00:25:18.482 Status Code: 0x2 00:25:18.482 Status Code Type: 0x0 00:25:18.482 Do Not Retry: 1 00:25:18.482 Error Location: 0x28 00:25:18.482 LBA: 0x0 00:25:18.482 Namespace: 0x0 00:25:18.482 Vendor Log Page: 0x0 00:25:18.482 00:25:18.482 Number of Queues 00:25:18.482 ================ 00:25:18.482 Number of I/O Submission Queues: 128 00:25:18.482 Number of I/O Completion Queues: 128 00:25:18.482 00:25:18.482 ZNS Specific Controller Data 00:25:18.482 
============================ 00:25:18.482 Zone Append Size Limit: 0 00:25:18.482 00:25:18.482 00:25:18.482 Active Namespaces 00:25:18.482 ================= 00:25:18.482 get_feature(0x05) failed 00:25:18.482 Namespace ID:1 00:25:18.482 Command Set Identifier: NVM (00h) 00:25:18.482 Deallocate: Supported 00:25:18.482 Deallocated/Unwritten Error: Not Supported 00:25:18.482 Deallocated Read Value: Unknown 00:25:18.482 Deallocate in Write Zeroes: Not Supported 00:25:18.482 Deallocated Guard Field: 0xFFFF 00:25:18.482 Flush: Supported 00:25:18.482 Reservation: Not Supported 00:25:18.482 Namespace Sharing Capabilities: Multiple Controllers 00:25:18.482 Size (in LBAs): 1953525168 (931GiB) 00:25:18.482 Capacity (in LBAs): 1953525168 (931GiB) 00:25:18.482 Utilization (in LBAs): 1953525168 (931GiB) 00:25:18.482 UUID: 8fe61841-d1fe-416c-9cda-b500407d412c 00:25:18.482 Thin Provisioning: Not Supported 00:25:18.482 Per-NS Atomic Units: Yes 00:25:18.482 Atomic Boundary Size (Normal): 0 00:25:18.482 Atomic Boundary Size (PFail): 0 00:25:18.482 Atomic Boundary Offset: 0 00:25:18.482 NGUID/EUI64 Never Reused: No 00:25:18.482 ANA group ID: 1 00:25:18.482 Namespace Write Protected: No 00:25:18.482 Number of LBA Formats: 1 00:25:18.482 Current LBA Format: LBA Format #00 00:25:18.482 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:18.482 00:25:18.482 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:18.482 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:18.482 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:18.743 rmmod nvme_tcp 00:25:18.743 rmmod nvme_fabrics 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.743 14:52:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.648 14:52:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:20.648 
14:52:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:20.648 14:52:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:20.648 14:52:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:20.648 14:52:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:20.648 14:52:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:20.648 14:52:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:20.648 14:52:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:20.648 14:52:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:20.648 14:52:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:20.648 14:52:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:23.187 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:23.187 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:23.187 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:23.187 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:23.187 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:23.187 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:23.187 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:23.188 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:23.188 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:23.188 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:23.188 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:23.188 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:23.188 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:23.188 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:23.188 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:23.188 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:24.124 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:24.384 00:25:24.384 real 0m15.252s 00:25:24.384 user 0m3.699s 00:25:24.384 sys 0m7.870s 00:25:24.384 14:52:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:24.384 14:52:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.384 ************************************ 00:25:24.384 END TEST nvmf_identify_kernel_target 00:25:24.384 ************************************ 00:25:24.384 14:52:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:24.384 14:52:44 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:24.384 14:52:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:24.384 14:52:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.384 14:52:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.384 ************************************ 00:25:24.384 START TEST nvmf_auth_host 00:25:24.384 ************************************ 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:24.384 * Looking for test storage... 00:25:24.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:24.384 14:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:24.385 14:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.665 
14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:29.665 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:29.665 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:29.665 Found net devices under 0000:86:00.0: 
cvl_0_0 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:29.665 Found net devices under 0000:86:00.1: cvl_0_1 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:29.665 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.926 14:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.926 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.926 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.926 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:29.926 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.926 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.926 14:52:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.926 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:29.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:25:29.926 00:25:29.926 --- 10.0.0.2 ping statistics --- 00:25:29.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.926 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:25:29.926 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:30.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:25:30.186 00:25:30.186 --- 10.0.0.1 ping statistics --- 00:25:30.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.186 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2457163 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2457163 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2457163 ']' 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
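For reference, the nvmf_tcp_init plumbing traced above by nvmf/common.sh reduces to the following sequence. This is a condensed sketch assembled only from the commands visible in this log; the interface names cvl_0_0/cvl_0_1 and the namespace cvl_0_0_ns_spdk are the ones this run happened to pick, everything else is plain iproute2/iptables usage.

  # Flush stale addresses, then move the target-side port into its own network namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Address both ends: the initiator keeps cvl_0_1, the target gets cvl_0_0 inside the netns.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP listener port and verify reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched with "ip netns exec cvl_0_0_ns_spdk" prepended, which is why NVMF_APP is rebuilt from NVMF_TARGET_NS_CMD in the trace above.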
00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.186 14:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a0ad1fcb9a55aa2476a4067eef336791 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.HRn 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a0ad1fcb9a55aa2476a4067eef336791 0 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a0ad1fcb9a55aa2476a4067eef336791 0 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a0ad1fcb9a55aa2476a4067eef336791 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.HRn 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.HRn 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.HRn 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:31.165 
14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dec8c5884268dff722d846c26461ca5f90a67945b321a8d693db8269689d471e 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XDq 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dec8c5884268dff722d846c26461ca5f90a67945b321a8d693db8269689d471e 3 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dec8c5884268dff722d846c26461ca5f90a67945b321a8d693db8269689d471e 3 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dec8c5884268dff722d846c26461ca5f90a67945b321a8d693db8269689d471e 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XDq 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XDq 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.XDq 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e1f9b0fb50a174faa6be7ae05c9d22c0593727d750f487b8 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dfI 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e1f9b0fb50a174faa6be7ae05c9d22c0593727d750f487b8 0 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e1f9b0fb50a174faa6be7ae05c9d22c0593727d750f487b8 0 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e1f9b0fb50a174faa6be7ae05c9d22c0593727d750f487b8 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dfI 00:25:31.165 14:52:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dfI 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.dfI 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0e3cfd522b70b5e84725035ce707fda56012a3ef85e6d870 00:25:31.165 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.xFq 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0e3cfd522b70b5e84725035ce707fda56012a3ef85e6d870 2 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0e3cfd522b70b5e84725035ce707fda56012a3ef85e6d870 2 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0e3cfd522b70b5e84725035ce707fda56012a3ef85e6d870 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.xFq 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.xFq 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.xFq 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=20197338ac3b39905de58bde1c354984 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.LW5 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 20197338ac3b39905de58bde1c354984 1 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 20197338ac3b39905de58bde1c354984 1 
00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=20197338ac3b39905de58bde1c354984 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.LW5 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.LW5 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.LW5 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9c2b904397d47c6ef301214cebf9ab8b 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.AIO 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9c2b904397d47c6ef301214cebf9ab8b 1 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9c2b904397d47c6ef301214cebf9ab8b 1 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9c2b904397d47c6ef301214cebf9ab8b 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:31.166 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.AIO 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.AIO 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.AIO 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=ef13f44e833535e195306bf5c55776dc5fbe9702bfd851cf 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.EnL 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ef13f44e833535e195306bf5c55776dc5fbe9702bfd851cf 2 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ef13f44e833535e195306bf5c55776dc5fbe9702bfd851cf 2 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ef13f44e833535e195306bf5c55776dc5fbe9702bfd851cf 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.EnL 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.EnL 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.EnL 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1a5b70ccd7998c523d3cc2c6d82a26ec 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DhD 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1a5b70ccd7998c523d3cc2c6d82a26ec 0 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1a5b70ccd7998c523d3cc2c6d82a26ec 0 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1a5b70ccd7998c523d3cc2c6d82a26ec 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DhD 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DhD 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.DhD 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c86e4edccb3d9ba3d1734f5feb7817afe8377b27e2807bc9f09667c72d831e53 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.5pL 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c86e4edccb3d9ba3d1734f5feb7817afe8377b27e2807bc9f09667c72d831e53 3 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c86e4edccb3d9ba3d1734f5feb7817afe8377b27e2807bc9f09667c72d831e53 3 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c86e4edccb3d9ba3d1734f5feb7817afe8377b27e2807bc9f09667c72d831e53 00:25:31.426 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:31.427 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:31.427 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.5pL 00:25:31.427 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.5pL 00:25:31.427 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.5pL 00:25:31.427 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:31.427 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2457163 00:25:31.427 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2457163 ']' 00:25:31.427 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.427 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:31.427 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
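The gen_dhchap_key traces above draw random bytes with xxd -p from /dev/urandom and pipe them through a short inline python step (format_dhchap_key / format_key) to produce the DHHC-1:<digest>:<base64>: secrets used for the rest of the run, with digest ids 0/1/2/3 standing for null/sha256/sha384/sha512 as in the digests table. A sketch of that encoding, on the assumption that the base64 payload is the ASCII secret with a little-endian CRC-32 appended (the DH-HMAC-CHAP secret representation); the helper names and digest ids come from the trace itself.

import base64
import struct
import zlib

def format_dhchap_key(secret: str, digest_id: int) -> str:
    # DHHC-1:<digest>:<base64(secret || crc32(secret))>:
    raw = secret.encode()                         # the hex string produced by xxd -p
    crc = struct.pack("<I", zlib.crc32(raw))      # little-endian CRC-32 of the secret
    return f"DHHC-1:{digest_id:02d}:{base64.b64encode(raw + crc).decode()}:"

# With the first key from the trace this should reproduce the DHHC-1:00:... value
# written to /tmp/spdk.key-null.HRn (digest id 0 means the secret is used as-is).
print(format_dhchap_key("a0ad1fcb9a55aa2476a4067eef336791", 0))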
00:25:31.427 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:31.427 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.HRn 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.XDq ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XDq 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.dfI 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.xFq ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xFq 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.LW5 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.AIO ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AIO 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
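Each rpc_cmd above is a JSON-RPC 2.0 call into the running nvmf_tgt process over /var/tmp/spdk.sock, here keyring_file_add_key registering key0 through key2 and ckey0 through ckey2; the registration loop continues below for key3 and key4. A rough sketch of one such exchange over a raw socket, assuming the default socket path; the test scripts themselves go through rpc.py rather than hand-rolled requests, and bdev_nvme_get_controllers (used further down in the run) is shown here simply because it takes no parameters.

import json
import socket

def spdk_rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
    # One JSON-RPC 2.0 request/response exchange with an SPDK application.
    req = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise RuntimeError("RPC socket closed before a complete reply")
            buf += chunk
            try:
                return json.loads(buf)["result"]   # done once the reply parses as JSON
            except json.JSONDecodeError:
                continue                           # partial reply, keep reading

# Equivalent of `rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'` seen later in the log.
for ctrlr in spdk_rpc("bdev_nvme_get_controllers"):
    print(ctrlr["name"])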
00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.EnL 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.DhD ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.DhD 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.5pL 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
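nvmet_auth_init hands the initiator IP (10.0.0.1) to configure_kernel_target, which builds a Linux kernel NVMe-oF target under the configfs paths just defined: the mkdir/echo/ln -s calls traced below create subsystem nqn.2024-02.io.spdk:cnode0, back namespace 1 with /dev/nvme0n1 once setup.sh has returned the drive to the kernel nvme driver, and publish it on 10.0.0.1:4420 over TCP. A Python sketch of those configfs writes; the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumptions based on the usual nvmet configfs layout, since the xtrace output does not show the redirection targets of the echo commands. Root privileges and a loaded nvmet module are required.

from pathlib import Path

NVMET = Path("/sys/kernel/config/nvmet")
SUBSYS = NVMET / "subsystems/nqn.2024-02.io.spdk:cnode0"
NS = SUBSYS / "namespaces/1"
PORT = NVMET / "ports/1"

def write(attr: Path, value: str) -> None:
    attr.write_text(value + "\n")              # configfs attributes are plain text files

def configure_kernel_target() -> None:
    for d in (SUBSYS, NS, PORT):
        d.mkdir(parents=True, exist_ok=True)   # mkdir in configfs instantiates the object
    write(SUBSYS / "attr_model", "SPDK-nqn.2024-02.io.spdk:cnode0")
    write(SUBSYS / "attr_allow_any_host", "1") # the later `echo 0` appears to turn this off again
    write(NS / "device_path", "/dev/nvme0n1")  # back namespace 1 with the local drive
    write(NS / "enable", "1")
    write(PORT / "addr_trtype", "tcp")         # TCP listener on 10.0.0.1:4420, IPv4
    write(PORT / "addr_traddr", "10.0.0.1")
    write(PORT / "addr_trsvcid", "4420")
    write(PORT / "addr_adrfam", "ipv4")
    # Linking the subsystem into the port is what actually starts serving it.
    (PORT / "subsystems" / SUBSYS.name).symlink_to(SUBSYS)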
00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:31.687 14:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:34.227 Waiting for block devices as requested 00:25:34.227 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:34.487 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:34.487 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:34.487 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:34.747 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:34.747 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:34.747 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:34.747 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:35.006 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:35.006 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:35.006 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:35.266 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:35.266 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:35.266 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:35.266 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:35.526 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:35.526 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:36.095 No valid GPT data, bailing 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:36.095 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:36.095 00:25:36.095 Discovery Log Number of Records 2, Generation counter 2 00:25:36.095 =====Discovery Log Entry 0====== 00:25:36.095 trtype: tcp 00:25:36.095 adrfam: ipv4 00:25:36.095 subtype: current discovery subsystem 00:25:36.096 treq: not specified, sq flow control disable supported 00:25:36.096 portid: 1 00:25:36.096 trsvcid: 4420 00:25:36.096 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:36.096 traddr: 10.0.0.1 00:25:36.096 eflags: none 00:25:36.096 sectype: none 00:25:36.096 =====Discovery Log Entry 1====== 00:25:36.096 trtype: tcp 00:25:36.096 adrfam: ipv4 00:25:36.096 subtype: nvme subsystem 00:25:36.096 treq: not specified, sq flow control disable supported 00:25:36.096 portid: 1 00:25:36.096 trsvcid: 4420 00:25:36.096 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:36.096 traddr: 10.0.0.1 00:25:36.096 eflags: none 00:25:36.096 sectype: none 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 
]] 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.096 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.356 nvme0n1 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.356 14:52:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.356 
14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.356 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.616 nvme0n1 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.616 14:52:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.616 nvme0n1 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
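The loop traced here exercises one digest/dhgroup/key combination at a time: nvmet_auth_set_key writes the negotiated parameters for host nqn.2024-02.io.spdk:host0 into the kernel target (the echo 'hmac(sha256)', echo ffdhe2048 and echo DHHC-1:... calls), then connect_authenticate configures the SPDK host side with bdev_nvme_set_options, attaches with bdev_nvme_attach_controller using --dhchap-key and --dhchap-ctrlr-key, checks that nvme0 shows up in bdev_nvme_get_controllers, and detaches again. A sketch of the target-side half; the dhchap_* attribute names under /sys/kernel/config/nvmet/hosts/<hostnqn>/ are assumptions based on the kernel nvmet layout, while the values are the ones visible in the trace.

from pathlib import Path

HOST = Path("/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0")

def nvmet_auth_set_key(digest: str, dhgroup: str, key: str, ctrlr_key: str = "") -> None:
    # Mirror of what the traced echo calls appear to do for one test iteration.
    (HOST / "dhchap_hash").write_text(digest + "\n")      # e.g. hmac(sha256)
    (HOST / "dhchap_dhgroup").write_text(dhgroup + "\n")  # e.g. ffdhe2048
    (HOST / "dhchap_key").write_text(key + "\n")          # host secret, DHHC-1:xx:...:
    if ctrlr_key:                                         # bidirectional auth is optional
        (HOST / "dhchap_ctrlr_key").write_text(ctrlr_key + "\n")

# Values from the keyid=1 iteration above.
nvmet_auth_set_key(
    "hmac(sha256)",
    "ffdhe2048",
    "DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==:",
    "DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==:",
)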
00:25:36.616 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.876 14:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.876 nvme0n1 00:25:36.876 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.876 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.876 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.876 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.876 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.876 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.876 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.876 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.876 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.876 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.876 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:36.877 14:52:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.877 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.137 nvme0n1 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.137 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.138 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.138 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.138 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.398 nvme0n1 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.398 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.658 nvme0n1 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.658 nvme0n1 00:25:37.658 
14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.658 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.919 14:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.919 nvme0n1 00:25:37.919 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.919 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.919 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.919 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.919 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.919 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.919 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.919 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
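The trace repeats the same host-side sequence for every digest/dhgroup/keyid combination. A minimal standalone sketch of what one connect_authenticate iteration drives over the SPDK JSON-RPC interface follows; it assumes scripts/rpc.py on the default RPC socket and that the DH-HMAC-CHAP secrets were already registered under the names keyN/ckeyN earlier in the test, so only the addresses, NQNs and flags visible in the trace are taken as given (for keyid 4 the --dhchap-ctrlr-key argument is omitted, matching the empty ckey above).

  # Host side: restrict negotiation to this digest/dhgroup, attach with the keyid under
  # test, check that the controller came up authenticated, then detach for the next pass.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0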
00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.920 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.180 nvme0n1 00:25:38.180 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.180 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.180 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.180 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.180 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.180 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.180 
14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.181 14:52:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.181 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.441 nvme0n1 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:38.441 14:52:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.441 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.702 nvme0n1 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.702 14:52:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.702 14:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.962 nvme0n1 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.962 14:52:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.962 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.222 nvme0n1 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
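The lines tagged host/auth.sh@101 through @104 in this trace come from the test's outer loop. A paraphrased sketch of that loop, reconstructed only from those line tags (the helper bodies are simplified, not the verbatim script), with sha256 being the digest exercised in this stretch of the log:

  for dhgroup in "${dhgroups[@]}"; do                      # auth.sh@101: ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do                         # auth.sh@102: keyids 0..4
      nvmet_auth_set_key   sha256 "$dhgroup" "$keyid"      # auth.sh@103: target side - the echoes of 'hmac(sha256)',
                                                           #   the dhgroup and the DHHC-1 key/ckey seen above
      connect_authenticate sha256 "$dhgroup" "$keyid"      # auth.sh@104: host side - set options, attach, verify nvme0, detach
    done
  done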
00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.222 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.482 nvme0n1 00:25:39.482 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.482 14:52:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.482 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.482 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.482 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.482 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.741 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.741 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.741 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.741 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.741 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.741 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.742 14:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.742 nvme0n1 00:25:39.742 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:40.002 14:53:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.002 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.262 nvme0n1 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.262 
14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:40.262 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.263 14:53:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.263 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.832 nvme0n1 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.832 14:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.093 nvme0n1 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.093 
14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.093 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.663 nvme0n1 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.663 14:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.923 nvme0n1 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.923 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.492 nvme0n1 00:25:42.492 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.492 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.492 14:53:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.492 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.492 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.492 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.492 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.492 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.492 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.492 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.752 14:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.320 nvme0n1 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.320 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.889 nvme0n1 00:25:43.889 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.889 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.889 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.889 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.889 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.889 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.889 14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.889 
14:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.889 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.889 14:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.889 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.889 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.889 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:43.889 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.889 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.889 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:43.889 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.889 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.890 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.459 nvme0n1 00:25:44.459 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.459 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.459 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.459 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.459 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:44.460 
14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.460 14:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.028 nvme0n1 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.029 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.288 nvme0n1 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:25:45.288 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.289 nvme0n1 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.289 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.548 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.548 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.548 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.548 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.548 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.548 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.549 nvme0n1 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.549 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.808 nvme0n1 00:25:45.808 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.808 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.808 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.808 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.808 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.809 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.809 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.809 14:53:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.809 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.809 14:53:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.809 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.068 nvme0n1 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.068 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.328 nvme0n1 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
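The trace above repeats the same pattern for each DH group in the suite (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144) and each key index 0-4: the target-side secret is installed with nvmet_auth_set_key, then connect_authenticate configures the host digests/dhgroups and performs an authenticated attach, check, and detach. A condensed sketch of that per-key host-side RPC flow, assuming scripts/rpc.py is invoked directly instead of through the suite's rpc_cmd wrapper (addresses, NQNs, key names and flags copied from the trace):
# host side: allow DH-CHAP with the digest/dhgroup under test
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
# authenticated attach using key1 and the bidirectional controller key ckey1
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# verify the controller came up, then tear it down before the next iteration
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0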
00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.328 nvme0n1 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.328 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.587 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.588 nvme0n1 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.588 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.847 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.848 14:53:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.848 nvme0n1 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.848 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.107 nvme0n1 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.107 14:53:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.107 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.367 nvme0n1 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.367 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.627 nvme0n1 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.627 14:53:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.627 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.915 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.915 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.915 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.916 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.916 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.916 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.916 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.916 14:53:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.916 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.916 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.916 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.916 14:53:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.916 14:53:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.916 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.916 14:53:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.916 nvme0n1 00:25:47.916 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.916 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.916 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.916 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.916 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.916 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:48.177 14:53:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.177 nvme0n1 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.177 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.437 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.437 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.437 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.437 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.437 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.437 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:48.437 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:48.438 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.698 nvme0n1 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.698 14:53:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.959 nvme0n1 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.959 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.528 nvme0n1 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.528 14:53:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.528 14:53:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.787 nvme0n1 00:25:49.787 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.787 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.787 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.787 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.787 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.787 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.787 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.787 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.787 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.787 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.787 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.787 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.787 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:49.787 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.788 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.047 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.306 nvme0n1 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.306 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.875 nvme0n1 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
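The trace above repeats one pattern per digest/dhgroup/keyid combination; the sketch below is an editorial summary of that loop, not part of the captured output. The digest/dhgroup lists and the keys/ckeys arrays are assumptions inferred from the values visible in this trace, and rpc_cmd stands in for SPDK's usual scripts/rpc.py wrapper.

# Editorial sketch (not from the log): the per-iteration flow host/auth.sh is exercising.
for digest in sha256 sha384 sha512; do
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do
      # Target side: install the DH-HMAC-CHAP secret (and controller secret, if any)
      # for this keyid -- the nvmet-side writes themselves are not visible in this excerpt.
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
      # Host side: restrict the initiator to this digest/dhgroup pair.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # Pass a controller key only when a ckey exists for this keyid (keyid 4 has none).
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
      # Verify the authenticated connect produced a controller, then tear it down
      # before the next combination.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
done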
00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.875 14:53:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.444 nvme0n1 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.445 14:53:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.015 nvme0n1 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.015 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.586 nvme0n1 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.586 14:53:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.155 nvme0n1 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:53.155 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:53.156 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:53.156 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:53.156 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:53.156 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:53.156 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.156 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:53.156 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:53.156 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:53.156 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.156 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:53.156 14:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.156 14:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.415 14:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.415 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.415 14:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.415 14:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.415 14:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.415 14:53:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.415 14:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.415 14:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.415 14:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.415 14:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.415 14:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.415 14:53:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.415 14:53:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:53.415 14:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.415 14:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.984 nvme0n1 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.984 nvme0n1 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.984 14:53:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:53.984 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:53.985 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.244 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.245 nvme0n1 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.245 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.505 nvme0n1 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.505 14:53:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.505 14:53:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.505 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.765 nvme0n1 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:54.765 14:53:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.766 14:53:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.766 nvme0n1 00:25:54.766 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.766 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.766 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.766 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.766 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.766 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.766 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.766 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.766 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.766 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.026 nvme0n1 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.026 
14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:55.026 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.027 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.286 14:53:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.286 nvme0n1 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:55.286 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.287 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.546 nvme0n1 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.546 14:53:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:55.546 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.547 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.806 nvme0n1 00:25:55.806 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.806 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.806 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.806 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.806 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.806 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.806 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.806 14:53:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.806 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.806 14:53:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.806 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.806 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.806 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.807 
14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.807 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.067 nvme0n1 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.067 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.328 nvme0n1 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.328 14:53:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.328 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.588 nvme0n1 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.588 14:53:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.847 nvme0n1 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.847 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.106 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.106 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.106 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.106 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.106 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.106 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.106 nvme0n1 00:25:57.106 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.106 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.106 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.106 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.106 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.106 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.364 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.365 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.365 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.365 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.365 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.365 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.365 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.365 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.365 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.365 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.365 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.365 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.365 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.623 nvme0n1 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.623 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
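For orientation, the host-side half of each iteration in this trace reduces to four RPC calls, all of which appear verbatim above: constrain the host's DH-HMAC-CHAP digests and DH groups, attach to the authenticated subsystem, confirm the controller came up, and detach again. The lines below are a minimal sketch of one such pass (sha512 / ffdhe6144 / keyid 0); it uses the harness's rpc_cmd wrapper exactly as the trace does and assumes key0/ckey0 were registered by the earlier setup portion of host/auth.sh, which is outside this excerpt.

# Sketch of one connect_authenticate pass, assuming key0 and ckey0 were
# registered during earlier (not shown) setup in host/auth.sh.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Attach over TCP to the initiator IP used throughout this run, presenting the
# host key and the controller (bidirectional) key.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the controller actually exists, then tear it down so the next
# digest/dhgroup/keyid combination starts from a clean state.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0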
00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.624 14:53:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.883 nvme0n1 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
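Before the keyid=1 pass that follows, note the controller-key handling visible in every iteration: the expansion ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) only produces the extra --dhchap-ctrlr-key argument when a controller secret exists for that key index, which is why the keyid=4 attach earlier passed --dhchap-key key4 alone (its ckey was empty, so no bidirectional authentication is requested). A small sketch of the idiom, using placeholder values rather than the real secrets from this run:

# How the optional controller key is threaded into the attach call.
# The array contents here are illustrative; the real secrets live in the
# test's keys/ckeys arrays set up earlier in host/auth.sh.
ckeys=([0]="example-ctrlr-secret" [4]="")
keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty value -> expands to nothing
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"                 # ctrlr-key flag omitted when ckey is empty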
00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.883 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.451 nvme0n1 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.451 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.711 nvme0n1 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:58.711 14:53:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.711 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.711 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:58.711 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:58.711 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.711 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:58.711 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.711 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.971 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.230 nvme0n1 00:25:59.230 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.230 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.230 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.230 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.230 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.230 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.230 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.230 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.230 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.230 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.230 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.230 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.230 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:59.230 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.231 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.800 nvme0n1 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.800 14:53:19 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTBhZDFmY2I5YTU1YWEyNDc2YTQwNjdlZWYzMzY3OTE9dJas: 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: ]] 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGVjOGM1ODg0MjY4ZGZmNzIyZDg0NmMyNjQ2MWNhNWY5MGE2Nzk0NWIzMjFhOGQ2OTNkYjgyNjk2ODlkNDcxZceIj6o=: 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.800 14:53:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.369 nvme0n1 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.369 14:53:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.939 nvme0n1 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.939 14:53:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjAxOTczMzhhYzNiMzk5MDVkZTU4YmRlMWMzNTQ5ODS3K+nw: 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: ]] 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWMyYjkwNDM5N2Q0N2M2ZWYzMDEyMTRjZWJmOWFiOGJrk5eu: 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.939 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.540 nvme0n1 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWYxM2Y0NGU4MzM1MzVlMTk1MzA2YmY1YzU1Nzc2ZGM1ZmJlOTcwMmJmZDg1MWNmM0LBVA==: 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: ]] 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1YjcwY2NkNzk5OGM1MjNkM2NjMmM2ZDgyYTI2ZWPBZtnv: 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:01.540 14:53:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.540 14:53:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.110 nvme0n1 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yzg2ZTRlZGNjYjNkOWJhM2QxNzM0ZjVmZWI3ODE3YWZlODM3N2IyN2UyODA3YmM5ZjA5NjY3YzcyZDgzMWU1M9nzzJU=: 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:02.110 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.679 nvme0n1 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmOWIwZmI1MGExNzRmYWE2YmU3YWUwNWM5ZDIyYzA1OTM3MjdkNzUwZjQ4N2I4gw/Iag==: 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: ]] 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUzY2ZkNTIyYjcwYjVlODQ3MjUwMzVjZTcwN2ZkYTU2MDEyYTNlZjg1ZTZkODcwldnukw==: 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.679 
14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.679 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.939 request: 00:26:02.939 { 00:26:02.939 "name": "nvme0", 00:26:02.939 "trtype": "tcp", 00:26:02.939 "traddr": "10.0.0.1", 00:26:02.939 "adrfam": "ipv4", 00:26:02.939 "trsvcid": "4420", 00:26:02.939 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:02.939 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:02.939 "prchk_reftag": false, 00:26:02.939 "prchk_guard": false, 00:26:02.939 "hdgst": false, 00:26:02.939 "ddgst": false, 00:26:02.939 "method": "bdev_nvme_attach_controller", 00:26:02.939 "req_id": 1 00:26:02.939 } 00:26:02.939 Got JSON-RPC error response 00:26:02.939 response: 00:26:02.939 { 00:26:02.939 "code": -5, 00:26:02.939 "message": "Input/output error" 00:26:02.939 } 00:26:02.939 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:02.939 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:02.939 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:02.939 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:02.939 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:02.939 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.939 14:53:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:02.939 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.939 14:53:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.939 request: 00:26:02.939 { 00:26:02.939 "name": "nvme0", 00:26:02.939 "trtype": "tcp", 00:26:02.939 "traddr": "10.0.0.1", 00:26:02.939 "adrfam": "ipv4", 00:26:02.939 "trsvcid": "4420", 00:26:02.939 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:02.939 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:02.939 "prchk_reftag": false, 00:26:02.939 "prchk_guard": false, 00:26:02.939 "hdgst": false, 00:26:02.939 "ddgst": false, 00:26:02.939 "dhchap_key": "key2", 00:26:02.939 "method": "bdev_nvme_attach_controller", 00:26:02.939 "req_id": 1 00:26:02.939 } 00:26:02.939 Got JSON-RPC error response 00:26:02.939 response: 00:26:02.939 { 00:26:02.939 "code": -5, 00:26:02.939 "message": "Input/output error" 00:26:02.939 } 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:02.939 14:53:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:02.939 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.940 request: 00:26:02.940 { 00:26:02.940 "name": "nvme0", 00:26:02.940 "trtype": "tcp", 00:26:02.940 "traddr": "10.0.0.1", 00:26:02.940 "adrfam": "ipv4", 
00:26:02.940 "trsvcid": "4420", 00:26:02.940 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:02.940 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:02.940 "prchk_reftag": false, 00:26:02.940 "prchk_guard": false, 00:26:02.940 "hdgst": false, 00:26:02.940 "ddgst": false, 00:26:02.940 "dhchap_key": "key1", 00:26:02.940 "dhchap_ctrlr_key": "ckey2", 00:26:02.940 "method": "bdev_nvme_attach_controller", 00:26:02.940 "req_id": 1 00:26:02.940 } 00:26:02.940 Got JSON-RPC error response 00:26:02.940 response: 00:26:02.940 { 00:26:02.940 "code": -5, 00:26:02.940 "message": "Input/output error" 00:26:02.940 } 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:02.940 rmmod nvme_tcp 00:26:02.940 rmmod nvme_fabrics 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2457163 ']' 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2457163 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2457163 ']' 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2457163 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:02.940 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2457163 00:26:03.200 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:03.200 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:03.200 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2457163' 00:26:03.200 killing process with pid 2457163 00:26:03.200 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2457163 00:26:03.200 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2457163 00:26:03.200 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:26:03.200 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:03.200 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:03.200 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:03.200 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:03.200 14:53:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.200 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.200 14:53:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.739 14:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:05.739 14:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:05.739 14:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:05.739 14:53:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:05.739 14:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:05.739 14:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:05.739 14:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:05.739 14:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:05.739 14:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:05.739 14:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:05.739 14:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:05.739 14:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:05.739 14:53:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:08.278 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:08.278 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:08.847 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:09.107 14:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.HRn /tmp/spdk.key-null.dfI /tmp/spdk.key-sha256.LW5 /tmp/spdk.key-sha384.EnL /tmp/spdk.key-sha512.5pL 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:09.107 14:53:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:11.647 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:11.647 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:11.647 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:11.647 00:26:11.647 real 0m47.304s 00:26:11.647 user 0m41.848s 00:26:11.647 sys 0m11.759s 00:26:11.647 14:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:11.647 14:53:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.647 ************************************ 00:26:11.647 END TEST nvmf_auth_host 00:26:11.647 ************************************ 00:26:11.647 14:53:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:11.647 14:53:31 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:26:11.647 14:53:31 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:11.647 14:53:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:11.647 14:53:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:11.647 14:53:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:11.647 ************************************ 00:26:11.647 START TEST nvmf_digest 00:26:11.647 ************************************ 00:26:11.647 14:53:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:11.908 * Looking for test storage... 
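For reference, the DH-HMAC-CHAP failure checks traced in the nvmf_auth_host section above reduce to the sketch below. It is an illustrative summary, not captured output: the address, NQNs and --dhchap-* flags are copied from the trace, the key names key1/key2/ckey2 are assumed to have been registered earlier by the test, and the assert_attach_fails helper is hypothetical. Each call is expected to return the JSON-RPC error -5 (Input/output error) because the kernel target requires a key the host does not present correctly.

# run from an SPDK checkout against the already-running app, as in the test
assert_attach_fails() {
    # attach must NOT succeed; invert the exit status so "failure" passes
    if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 "$@"; then
        return 1
    fi
    return 0
}

assert_attach_fails                                              # no key offered at all
assert_attach_fails --dhchap-key key2                            # wrong host key
assert_attach_fails --dhchap-key key1 --dhchap-ctrlr-key ckey2   # wrong controller key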
00:26:11.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:11.908 14:53:31 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.908 14:53:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:11.908 14:53:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.908 14:53:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.908 14:53:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.908 14:53:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.908 14:53:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.908 14:53:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.908 14:53:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.908 14:53:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:11.908 14:53:32 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:11.908 14:53:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:17.190 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:17.190 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:17.190 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:17.190 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:17.190 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:17.190 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:17.190 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:17.190 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:17.190 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:17.190 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:17.191 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:17.191 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:17.191 Found net devices under 0000:86:00.0: cvl_0_0 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:17.191 Found net devices under 0000:86:00.1: cvl_0_1 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.191 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:17.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:26:17.451 00:26:17.451 --- 10.0.0.2 ping statistics --- 00:26:17.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.451 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:17.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.413 ms 00:26:17.451 00:26:17.451 --- 10.0.0.1 ping statistics --- 00:26:17.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.451 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:17.451 ************************************ 00:26:17.451 START TEST nvmf_digest_clean 00:26:17.451 ************************************ 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2469853 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2469853 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2469853 ']' 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:17.451 14:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:17.451 [2024-07-25 14:53:37.700997] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:26:17.451 [2024-07-25 14:53:37.701041] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.451 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.711 [2024-07-25 14:53:37.756068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.711 [2024-07-25 14:53:37.834351] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.711 [2024-07-25 14:53:37.834386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.711 [2024-07-25 14:53:37.834394] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.711 [2024-07-25 14:53:37.834400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.711 [2024-07-25 14:53:37.834405] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
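For orientation, the nvmf_tcp_init plumbing traced a little earlier (the ip/iptables/ping sequence) reduces to the following sketch. Interface names, addresses and commands are copied from the trace; it assumes root privileges and the same two-port e810 NIC exposing cvl_0_0/cvl_0_1. The target-side port is moved into a private network namespace on 10.0.0.2 while the initiator stays in the root namespace on 10.0.0.1.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target interface into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator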
00:26:17.711 [2024-07-25 14:53:37.834422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.280 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:18.280 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:18.280 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:18.280 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:18.280 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.280 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.280 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:18.280 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:18.280 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:18.280 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.280 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.540 null0 00:26:18.540 [2024-07-25 14:53:38.634339] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.540 [2024-07-25 14:53:38.658512] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2470090 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2470090 /var/tmp/bperf.sock 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2470090 ']' 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:18.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:18.540 14:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.540 [2024-07-25 14:53:38.708310] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:26:18.540 [2024-07-25 14:53:38.708356] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470090 ] 00:26:18.540 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.540 [2024-07-25 14:53:38.762362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.800 [2024-07-25 14:53:38.843266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.369 14:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:19.369 14:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:19.369 14:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:19.369 14:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:19.369 14:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:19.628 14:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.628 14:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.887 nvme0n1 00:26:19.887 14:53:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:19.887 14:53:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:19.887 Running I/O for 2 seconds... 
00:26:22.423 00:26:22.423 Latency(us) 00:26:22.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.423 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:22.423 nvme0n1 : 2.00 26306.42 102.76 0.00 0.00 4860.17 2421.98 28835.84 00:26:22.423 =================================================================================================================== 00:26:22.423 Total : 26306.42 102.76 0.00 0.00 4860.17 2421.98 28835.84 00:26:22.423 0 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:22.423 | select(.opcode=="crc32c") 00:26:22.423 | "\(.module_name) \(.executed)"' 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2470090 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2470090 ']' 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2470090 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2470090 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2470090' 00:26:22.423 killing process with pid 2470090 00:26:22.423 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2470090 00:26:22.423 Received shutdown signal, test time was about 2.000000 seconds 00:26:22.423 00:26:22.423 Latency(us) 00:26:22.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.423 =================================================================================================================== 00:26:22.423 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2470090 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:22.424 14:53:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2470782 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2470782 /var/tmp/bperf.sock 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2470782 ']' 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:22.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:22.424 14:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:22.424 [2024-07-25 14:53:42.564131] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:26:22.424 [2024-07-25 14:53:42.564183] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470782 ] 00:26:22.424 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:22.424 Zero copy mechanism will not be used. 
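The per-case flow the digest test keeps repeating above (bdevperf started with a private RPC socket, then driven over that socket) can be condensed into the sketch below. It is illustrative rather than captured output: every command and flag appears verbatim in the trace, paths assume the current directory is an SPDK checkout, and the socket-wait loop is a simplification of the harness's waitforlisten. Vary -w/-o/-q per case (randread 4096/128, randread 131072/16, randwrite 4096/128, ...).

./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &

# wait until the bdevperf RPC socket is available
while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done

./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests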
00:26:22.424 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.424 [2024-07-25 14:53:42.617340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.424 [2024-07-25 14:53:42.684926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.414 14:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:23.414 14:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:23.414 14:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:23.414 14:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:23.414 14:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:23.414 14:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:23.414 14:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:23.983 nvme0n1 00:26:23.983 14:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:23.983 14:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:23.983 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:23.983 Zero copy mechanism will not be used. 00:26:23.983 Running I/O for 2 seconds... 
00:26:25.890 00:26:25.890 Latency(us) 00:26:25.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.890 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:25.890 nvme0n1 : 2.00 2143.53 267.94 0.00 0.00 7461.87 6810.05 20401.64 00:26:25.890 =================================================================================================================== 00:26:25.890 Total : 2143.53 267.94 0.00 0.00 7461.87 6810.05 20401.64 00:26:25.890 0 00:26:25.890 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:25.890 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:25.890 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:25.890 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:25.890 | select(.opcode=="crc32c") 00:26:25.890 | "\(.module_name) \(.executed)"' 00:26:25.890 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:26.149 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:26.149 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:26.149 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:26.149 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:26.149 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2470782 00:26:26.150 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2470782 ']' 00:26:26.150 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2470782 00:26:26.150 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:26.150 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:26.150 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2470782 00:26:26.150 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:26.150 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:26.150 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2470782' 00:26:26.150 killing process with pid 2470782 00:26:26.150 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2470782 00:26:26.150 Received shutdown signal, test time was about 2.000000 seconds 00:26:26.150 00:26:26.150 Latency(us) 00:26:26.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.150 =================================================================================================================== 00:26:26.150 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:26.150 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2470782 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:26.410 14:53:46 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2471413 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2471413 /var/tmp/bperf.sock 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2471413 ']' 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:26.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:26.410 14:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:26.410 [2024-07-25 14:53:46.535628] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
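Each pass launches its own bdevperf instance; the command line recorded above is repeated here with the apparent meaning of each flag (the reading of -z and --wait-for-rpc is inferred from the RPC ordering visible in this trace, where framework_start_init and perform_tests are issued explicitly).

  # -m 2: core mask 0x2, so the reactor runs on core 1 as reported below
  # -r: UNIX-domain RPC socket that the bperf_rpc/bperf_py helpers talk to
  # -w/-o/-q/-t: workload type, I/O size in bytes, queue depth, run time in seconds
  # -z: hold off I/O generation until a perform_tests RPC arrives
  # --wait-for-rpc: defer subsystem init until framework_start_init is called
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc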
00:26:26.410 [2024-07-25 14:53:46.535675] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471413 ] 00:26:26.410 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.410 [2024-07-25 14:53:46.589989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.410 [2024-07-25 14:53:46.669248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.348 14:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:27.348 14:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:27.348 14:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:27.348 14:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:27.348 14:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:27.348 14:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:27.348 14:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:27.607 nvme0n1 00:26:27.607 14:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:27.607 14:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:27.866 Running I/O for 2 seconds... 
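In the result tables the throughput column follows directly from IOPS and the configured I/O size, and the Average/min/max columns are latencies in microseconds per the Latency(us) header; as a check against the randread summary above:

  MiB/s = IOPS * io_size / 2^20
  2143.53 * 131072 / 1048576 ≈ 267.94 MiB/s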
00:26:29.771 00:26:29.771 Latency(us) 00:26:29.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.772 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:29.772 nvme0n1 : 2.00 26660.89 104.14 0.00 0.00 4794.34 3419.27 37384.01 00:26:29.772 =================================================================================================================== 00:26:29.772 Total : 26660.89 104.14 0.00 0.00 4794.34 3419.27 37384.01 00:26:29.772 0 00:26:29.772 14:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:29.772 14:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:29.772 14:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:29.772 14:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:29.772 | select(.opcode=="crc32c") 00:26:29.772 | "\(.module_name) \(.executed)"' 00:26:29.772 14:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:30.031 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:30.031 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:30.031 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:30.031 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:30.031 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2471413 00:26:30.031 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2471413 ']' 00:26:30.031 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2471413 00:26:30.031 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:30.031 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:30.032 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2471413 00:26:30.032 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:30.032 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:30.032 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2471413' 00:26:30.032 killing process with pid 2471413 00:26:30.032 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2471413 00:26:30.032 Received shutdown signal, test time was about 2.000000 seconds 00:26:30.032 00:26:30.032 Latency(us) 00:26:30.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.032 =================================================================================================================== 00:26:30.032 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:30.032 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2471413 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:30.291 14:53:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2471964 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2471964 /var/tmp/bperf.sock 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2471964 ']' 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:30.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:30.291 14:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:30.291 [2024-07-25 14:53:50.424604] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:26:30.291 [2024-07-25 14:53:50.424652] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471964 ] 00:26:30.291 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:30.291 Zero copy mechanism will not be used. 
00:26:30.291 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.291 [2024-07-25 14:53:50.477612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.291 [2024-07-25 14:53:50.557240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.228 14:53:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:31.228 14:53:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:31.228 14:53:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:31.228 14:53:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:31.228 14:53:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:31.228 14:53:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:31.228 14:53:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:31.794 nvme0n1 00:26:31.794 14:53:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:31.794 14:53:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:31.794 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:31.794 Zero copy mechanism will not be used. 00:26:31.794 Running I/O for 2 seconds... 
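After every pass the harness reads the accel framework statistics back from bdevperf and checks that the CRC-32C digest work was actually executed by the expected module (software here, since DSA scanning is disabled for these passes); a condensed sketch of that check, using the same jq filter that appears in the xtrace:

  # query accel statistics over the bdevperf RPC socket and pick out the crc32c row
  read -r acc_module acc_executed < <(
      scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # pass only if crc32c ran at least once and was handled by the expected module
  (( acc_executed > 0 )) && [[ $acc_module == software ]]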
00:26:33.699 00:26:33.700 Latency(us) 00:26:33.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.700 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:33.700 nvme0n1 : 2.01 1363.12 170.39 0.00 0.00 11702.25 8719.14 37611.97 00:26:33.700 =================================================================================================================== 00:26:33.700 Total : 1363.12 170.39 0.00 0.00 11702.25 8719.14 37611.97 00:26:33.700 0 00:26:33.700 14:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:33.700 14:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:33.700 14:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:33.700 14:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:33.700 | select(.opcode=="crc32c") 00:26:33.700 | "\(.module_name) \(.executed)"' 00:26:33.700 14:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2471964 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2471964 ']' 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2471964 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2471964 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2471964' 00:26:33.960 killing process with pid 2471964 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2471964 00:26:33.960 Received shutdown signal, test time was about 2.000000 seconds 00:26:33.960 00:26:33.960 Latency(us) 00:26:33.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.960 =================================================================================================================== 00:26:33.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:33.960 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2471964 00:26:34.219 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2469853 00:26:34.219 14:53:54 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2469853 ']' 00:26:34.219 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2469853 00:26:34.219 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:34.219 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:34.219 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2469853 00:26:34.219 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:34.219 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:34.219 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2469853' 00:26:34.219 killing process with pid 2469853 00:26:34.219 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2469853 00:26:34.219 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2469853 00:26:34.479 00:26:34.479 real 0m16.969s 00:26:34.479 user 0m33.741s 00:26:34.479 sys 0m3.256s 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:34.479 ************************************ 00:26:34.479 END TEST nvmf_digest_clean 00:26:34.479 ************************************ 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:34.479 ************************************ 00:26:34.479 START TEST nvmf_digest_error 00:26:34.479 ************************************ 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2472688 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2472688 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2472688 ']' 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.479 14:53:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:34.479 [2024-07-25 14:53:54.735336] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:26:34.479 [2024-07-25 14:53:54.735378] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.479 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.738 [2024-07-25 14:53:54.791657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.738 [2024-07-25 14:53:54.870320] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.738 [2024-07-25 14:53:54.870353] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.738 [2024-07-25 14:53:54.870360] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.738 [2024-07-25 14:53:54.870366] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.738 [2024-07-25 14:53:54.870372] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
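The nvmf_digest_error test starting here reuses the same bdevperf flow but first reroutes the target's crc32c handling into the accel error-injection module, so that corrupted data digests are produced on purpose; the key RPCs, taken from the trace that follows, are sketched below (rpc_cmd in the trace talks to the nvmf target's default RPC socket, while the bperf_rpc calls go to /var/tmp/bperf.sock). The long run of 'data digest error' / 'COMMAND TRANSIENT TRANSPORT ERROR' records further down is the initiator detecting those corrupted digests and retrying, which is what the test is set up to provoke.

  # target side: assign the crc32c opcode to the error-injection accel module
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  # bdevperf side: keep per-error statistics and retry failed I/O indefinitely
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: start corrupting crc32c results (arguments copied from the trace below)
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256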
00:26:34.738 [2024-07-25 14:53:54.870411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.307 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:35.307 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:35.307 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:35.307 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:35.307 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.307 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.307 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:35.307 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.307 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.307 [2024-07-25 14:53:55.560414] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:35.307 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.307 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:35.307 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:35.307 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.307 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.566 null0 00:26:35.566 [2024-07-25 14:53:55.650006] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.566 [2024-07-25 14:53:55.674180] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.566 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.566 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:35.566 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:35.566 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:35.566 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:35.566 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:35.566 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2472930 00:26:35.567 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2472930 /var/tmp/bperf.sock 00:26:35.567 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2472930 ']' 00:26:35.567 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:35.567 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:35.567 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:35.567 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:35.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:35.567 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:35.567 14:53:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.567 [2024-07-25 14:53:55.709476] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:26:35.567 [2024-07-25 14:53:55.709516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472930 ] 00:26:35.567 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.567 [2024-07-25 14:53:55.762066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.567 [2024-07-25 14:53:55.834410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.505 14:53:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:36.505 14:53:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:36.505 14:53:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:36.505 14:53:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:36.505 14:53:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:36.505 14:53:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.505 14:53:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:36.505 14:53:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.505 14:53:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.505 14:53:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:37.075 nvme0n1 00:26:37.075 14:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:37.075 14:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.075 14:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:37.075 14:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.075 14:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:37.075 14:53:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:37.075 Running I/O for 2 seconds... 00:26:37.075 [2024-07-25 14:53:57.194294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.194325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.194336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.206321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.206345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.206354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.216517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.216537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.216545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.226857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.226877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.226885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.237265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.237285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.237294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.247166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.247186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.247194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.259287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.259311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24585 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.259319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.271454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.271474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.271481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.279942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.279961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.279969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.296219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.296239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.296247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.304556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.304576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.304584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.314631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.314651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.314658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.323403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.323422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.323430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.334586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.334604] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.334612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.344287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.344307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.344314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.353814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.353834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.353842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.075 [2024-07-25 14:53:57.362855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.075 [2024-07-25 14:53:57.362875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.075 [2024-07-25 14:53:57.362883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.335 [2024-07-25 14:53:57.372853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.335 [2024-07-25 14:53:57.372872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.335 [2024-07-25 14:53:57.372880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.335 [2024-07-25 14:53:57.382451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.335 [2024-07-25 14:53:57.382471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.382479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.392279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.392299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.392306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.400947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.400966] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.400974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.411023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.411047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.411055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.420135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.420155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.420163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.430238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.430260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.430268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.443989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.444008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.444016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.455467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.455486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.455494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.464812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.464832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.464839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.479797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.479817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.479824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.490802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.490822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.490829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.503419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.503438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.503446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.514838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.514858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.514865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.523474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.523493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.523501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.533480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.533500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.533508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.542868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.542889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.542897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.551901] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.551921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.551928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.567178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.567197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.567204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.577593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.577613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.577620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.586827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.586847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.586854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.596357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.596376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.596384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.606464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.606484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.606492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.336 [2024-07-25 14:53:57.615268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.336 [2024-07-25 14:53:57.615287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.336 [2024-07-25 14:53:57.615298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.629766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.629787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.629796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.638980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.639000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.639008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.649223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.649243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.649251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.658160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.658181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.658189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.667544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.667565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.667573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.676539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.676560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.676568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.686590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.686611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.686618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.697068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.697089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.697098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.707427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.707451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.707459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.716260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.716280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.716288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.728463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.728484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.728491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.737843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.737863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.737871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.749909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.749930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.749937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.759071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.759091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.759099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.769372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.769392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.769400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.777940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.777960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.777968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.788332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.788353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.788360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.797459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.797480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.797487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.807543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.597 [2024-07-25 14:53:57.807563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.597 [2024-07-25 14:53:57.807571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.597 [2024-07-25 14:53:57.816254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.598 [2024-07-25 14:53:57.816274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.598 [2024-07-25 14:53:57.816282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.598 [2024-07-25 14:53:57.825671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.598 [2024-07-25 14:53:57.825690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:37.598 [2024-07-25 14:53:57.825698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.598 [2024-07-25 14:53:57.835498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.598 [2024-07-25 14:53:57.835517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.598 [2024-07-25 14:53:57.835524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.598 [2024-07-25 14:53:57.844794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.598 [2024-07-25 14:53:57.844814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.598 [2024-07-25 14:53:57.844822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.598 [2024-07-25 14:53:57.854104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.598 [2024-07-25 14:53:57.854124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.598 [2024-07-25 14:53:57.854131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.598 [2024-07-25 14:53:57.863205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.598 [2024-07-25 14:53:57.863225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.598 [2024-07-25 14:53:57.863233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.598 [2024-07-25 14:53:57.873070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.598 [2024-07-25 14:53:57.873090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.598 [2024-07-25 14:53:57.873102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.598 [2024-07-25 14:53:57.882677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.598 [2024-07-25 14:53:57.882697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.598 [2024-07-25 14:53:57.882705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.858 [2024-07-25 14:53:57.891883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.858 [2024-07-25 14:53:57.891904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4810 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.858 [2024-07-25 14:53:57.891912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.858 [2024-07-25 14:53:57.903218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.858 [2024-07-25 14:53:57.903239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.858 [2024-07-25 14:53:57.903247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.858 [2024-07-25 14:53:57.912793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.858 [2024-07-25 14:53:57.912813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.858 [2024-07-25 14:53:57.912821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.858 [2024-07-25 14:53:57.921192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.858 [2024-07-25 14:53:57.921212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.858 [2024-07-25 14:53:57.921220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.858 [2024-07-25 14:53:57.930833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.858 [2024-07-25 14:53:57.930853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.858 [2024-07-25 14:53:57.930861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.858 [2024-07-25 14:53:57.940564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.858 [2024-07-25 14:53:57.940584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.858 [2024-07-25 14:53:57.940592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.858 [2024-07-25 14:53:57.950022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.858 [2024-07-25 14:53:57.950049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.858 [2024-07-25 14:53:57.950057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.858 [2024-07-25 14:53:57.959500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.858 [2024-07-25 14:53:57.959520] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.858 [2024-07-25 14:53:57.959527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.858 [2024-07-25 14:53:57.969190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.858 [2024-07-25 14:53:57.969210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.858 [2024-07-25 14:53:57.969218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.858 [2024-07-25 14:53:57.977907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.858 [2024-07-25 14:53:57.977927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.858 [2024-07-25 14:53:57.977935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.858 [2024-07-25 14:53:57.987753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.858 [2024-07-25 14:53:57.987775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.858 [2024-07-25 14:53:57.987785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.858 [2024-07-25 14:53:57.997637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.858 [2024-07-25 14:53:57.997657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.858 [2024-07-25 14:53:57.997665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.858 [2024-07-25 14:53:58.006849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.858 [2024-07-25 14:53:58.006869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.858 [2024-07-25 14:53:58.006876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.858 [2024-07-25 14:53:58.016344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.859 [2024-07-25 14:53:58.016364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.859 [2024-07-25 14:53:58.016371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.859 [2024-07-25 14:53:58.025500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.859 [2024-07-25 
14:53:58.025520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.859 [2024-07-25 14:53:58.025528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.859 [2024-07-25 14:53:58.035347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.859 [2024-07-25 14:53:58.035367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.859 [2024-07-25 14:53:58.035378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.859 [2024-07-25 14:53:58.044920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.859 [2024-07-25 14:53:58.044940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.859 [2024-07-25 14:53:58.044947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.859 [2024-07-25 14:53:58.054094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.859 [2024-07-25 14:53:58.054114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.859 [2024-07-25 14:53:58.054122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.859 [2024-07-25 14:53:58.064664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.859 [2024-07-25 14:53:58.064684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.859 [2024-07-25 14:53:58.064692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.859 [2024-07-25 14:53:58.078363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.859 [2024-07-25 14:53:58.078382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.859 [2024-07-25 14:53:58.078390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.859 [2024-07-25 14:53:58.088230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.859 [2024-07-25 14:53:58.088249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.859 [2024-07-25 14:53:58.088257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.859 [2024-07-25 14:53:58.097536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1e48fb0) 00:26:37.859 [2024-07-25 14:53:58.097555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.859 [2024-07-25 14:53:58.097563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.859 [2024-07-25 14:53:58.106382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.859 [2024-07-25 14:53:58.106401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.859 [2024-07-25 14:53:58.106409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.859 [2024-07-25 14:53:58.116679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.859 [2024-07-25 14:53:58.116698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.859 [2024-07-25 14:53:58.116706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.859 [2024-07-25 14:53:58.125860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.859 [2024-07-25 14:53:58.125883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.859 [2024-07-25 14:53:58.125890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.859 [2024-07-25 14:53:58.135015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.859 [2024-07-25 14:53:58.135034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.859 [2024-07-25 14:53:58.135047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.859 [2024-07-25 14:53:58.146503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:37.859 [2024-07-25 14:53:58.146522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.859 [2024-07-25 14:53:58.146530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.119 [2024-07-25 14:53:58.157424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.119 [2024-07-25 14:53:58.157444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.119 [2024-07-25 14:53:58.157452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.119 [2024-07-25 14:53:58.166336] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.119 [2024-07-25 14:53:58.166355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.119 [2024-07-25 14:53:58.166363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.119 [2024-07-25 14:53:58.175856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.119 [2024-07-25 14:53:58.175877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.119 [2024-07-25 14:53:58.175885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.119 [2024-07-25 14:53:58.184085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.119 [2024-07-25 14:53:58.184105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.119 [2024-07-25 14:53:58.184112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.119 [2024-07-25 14:53:58.194888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.119 [2024-07-25 14:53:58.194907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.119 [2024-07-25 14:53:58.194915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.204328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.204348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.204356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.214247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.214267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.214276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.223022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.223041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.223056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.233492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.233511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.233520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.243477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.243496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.243504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.252434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.252452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.252460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.261845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.261864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.261871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.271183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.271202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.271210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.280590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.280609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.280616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.289097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.289116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.289127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.299115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.299135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.299142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.308574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.308594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.308602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.318447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.318466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.318473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.327426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.327445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.327452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.337054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.337073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.337081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.346469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.346489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.346496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.356028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.356051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.356059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.365383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.365402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.365410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.375561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.375584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.375591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.384704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.384723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.384730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.392772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.392791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.392799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.120 [2024-07-25 14:53:58.403312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.120 [2024-07-25 14:53:58.403331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.120 [2024-07-25 14:53:58.403339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.380 [2024-07-25 14:53:58.412718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.380 [2024-07-25 14:53:58.412739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.380 [2024-07-25 14:53:58.412747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.380 [2024-07-25 14:53:58.423099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.380 [2024-07-25 14:53:58.423119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:38.380 [2024-07-25 14:53:58.423127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.380 [2024-07-25 14:53:58.432078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.380 [2024-07-25 14:53:58.432097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.380 [2024-07-25 14:53:58.432104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.380 [2024-07-25 14:53:58.442017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.380 [2024-07-25 14:53:58.442035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.380 [2024-07-25 14:53:58.442048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.380 [2024-07-25 14:53:58.450809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.380 [2024-07-25 14:53:58.450828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.380 [2024-07-25 14:53:58.450839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.380 [2024-07-25 14:53:58.461175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.380 [2024-07-25 14:53:58.461194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.380 [2024-07-25 14:53:58.461201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.380 [2024-07-25 14:53:58.469725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.380 [2024-07-25 14:53:58.469745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.380 [2024-07-25 14:53:58.469752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.380 [2024-07-25 14:53:58.479531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.380 [2024-07-25 14:53:58.479551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.380 [2024-07-25 14:53:58.479559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.380 [2024-07-25 14:53:58.489868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.380 [2024-07-25 14:53:58.489889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 
lba:20464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.380 [2024-07-25 14:53:58.489897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.380 [2024-07-25 14:53:58.498598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.380 [2024-07-25 14:53:58.498618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.498626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.509180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.509200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.509208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.518387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.518406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.518414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.527981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.528000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.528007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.536295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.536317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.536324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.545725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.545744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.545752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.555883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.555903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.555910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.565395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.565414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.565422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.574501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.574521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.574528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.584258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.584277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.584284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.593523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.593541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.593549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.602802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.602821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.602828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.612428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.612447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.612455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.621596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 
00:26:38.381 [2024-07-25 14:53:58.621616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.621623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.631270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.631289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.631296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.641133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.641152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.641159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.649643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.649662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.649670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.659586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.659605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.659613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.381 [2024-07-25 14:53:58.668169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.381 [2024-07-25 14:53:58.668187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.381 [2024-07-25 14:53:58.668195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.643 [2024-07-25 14:53:58.678543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.643 [2024-07-25 14:53:58.678563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.643 [2024-07-25 14:53:58.678571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.643 [2024-07-25 14:53:58.687568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.643 [2024-07-25 14:53:58.687587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.643 [2024-07-25 14:53:58.687594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.643 [2024-07-25 14:53:58.697197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.643 [2024-07-25 14:53:58.697216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.643 [2024-07-25 14:53:58.697228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.643 [2024-07-25 14:53:58.706484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.643 [2024-07-25 14:53:58.706504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.643 [2024-07-25 14:53:58.706513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.643 [2024-07-25 14:53:58.716257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.643 [2024-07-25 14:53:58.716276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.643 [2024-07-25 14:53:58.716284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.643 [2024-07-25 14:53:58.725375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.643 [2024-07-25 14:53:58.725395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.643 [2024-07-25 14:53:58.725402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.643 [2024-07-25 14:53:58.735666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.643 [2024-07-25 14:53:58.735685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.643 [2024-07-25 14:53:58.735694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.643 [2024-07-25 14:53:58.745102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.643 [2024-07-25 14:53:58.745121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.643 [2024-07-25 14:53:58.745129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.643 [2024-07-25 14:53:58.755110] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.643 [2024-07-25 14:53:58.755129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.643 [2024-07-25 14:53:58.755137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.643 [2024-07-25 14:53:58.763441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.643 [2024-07-25 14:53:58.763460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.643 [2024-07-25 14:53:58.763468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.643 [2024-07-25 14:53:58.774542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.643 [2024-07-25 14:53:58.774560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.774568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.783211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.783234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.783241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.792765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.792784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.792792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.802049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.802068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.802076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.811726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.811745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.811752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:38.644 [2024-07-25 14:53:58.821617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.821637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.821644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.830243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.830262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.830269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.839825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.839843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.839851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.849304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.849324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.849331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.859358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.859378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.859389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.868259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.868278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.868286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.878247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.878265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.878273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.886560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.886578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.886586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.896073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.896092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.896099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.906077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.906096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.906103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.914652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.914671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.914679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.644 [2024-07-25 14:53:58.925192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.644 [2024-07-25 14:53:58.925210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.644 [2024-07-25 14:53:58.925218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.912 [2024-07-25 14:53:58.933888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.912 [2024-07-25 14:53:58.933908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.912 [2024-07-25 14:53:58.933916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.912 [2024-07-25 14:53:58.943818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.912 [2024-07-25 14:53:58.943841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.912 [2024-07-25 14:53:58.943848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.912 [2024-07-25 14:53:58.953837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.912 [2024-07-25 14:53:58.953856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.912 [2024-07-25 14:53:58.953864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.912 [2024-07-25 14:53:58.963842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.912 [2024-07-25 14:53:58.963861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.912 [2024-07-25 14:53:58.963868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.912 [2024-07-25 14:53:58.972185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.912 [2024-07-25 14:53:58.972204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.912 [2024-07-25 14:53:58.972211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.912 [2024-07-25 14:53:58.982394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.912 [2024-07-25 14:53:58.982413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.912 [2024-07-25 14:53:58.982421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.912 [2024-07-25 14:53:58.991066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.912 [2024-07-25 14:53:58.991085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.912 [2024-07-25 14:53:58.991093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.912 [2024-07-25 14:53:59.001591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.912 [2024-07-25 14:53:59.001611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.912 [2024-07-25 14:53:59.001619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.912 [2024-07-25 14:53:59.010642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.912 [2024-07-25 14:53:59.010662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:38.912 [2024-07-25 14:53:59.010669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.912 [2024-07-25 14:53:59.020695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.912 [2024-07-25 14:53:59.020715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.912 [2024-07-25 14:53:59.020722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.912 [2024-07-25 14:53:59.029670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.912 [2024-07-25 14:53:59.029688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.912 [2024-07-25 14:53:59.029696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.912 [2024-07-25 14:53:59.038923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.912 [2024-07-25 14:53:59.038942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.912 [2024-07-25 14:53:59.038949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.913 [2024-07-25 14:53:59.048512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.913 [2024-07-25 14:53:59.048531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.913 [2024-07-25 14:53:59.048539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.913 [2024-07-25 14:53:59.057845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.913 [2024-07-25 14:53:59.057864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.913 [2024-07-25 14:53:59.057871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.913 [2024-07-25 14:53:59.068088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.913 [2024-07-25 14:53:59.068108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.913 [2024-07-25 14:53:59.068115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.913 [2024-07-25 14:53:59.077359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.913 [2024-07-25 14:53:59.077379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:1548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.913 [2024-07-25 14:53:59.077387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.913 [2024-07-25 14:53:59.087011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.913 [2024-07-25 14:53:59.087032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.913 [2024-07-25 14:53:59.087040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.913 [2024-07-25 14:53:59.096464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.913 [2024-07-25 14:53:59.096484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.913 [2024-07-25 14:53:59.096492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.913 [2024-07-25 14:53:59.105927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.913 [2024-07-25 14:53:59.105947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.913 [2024-07-25 14:53:59.105958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.913 [2024-07-25 14:53:59.115387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.913 [2024-07-25 14:53:59.115408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.913 [2024-07-25 14:53:59.115416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.913 [2024-07-25 14:53:59.124856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.913 [2024-07-25 14:53:59.124876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.913 [2024-07-25 14:53:59.124884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.913 [2024-07-25 14:53:59.134373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.913 [2024-07-25 14:53:59.134393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.913 [2024-07-25 14:53:59.134401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.913 [2024-07-25 14:53:59.143729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.913 [2024-07-25 14:53:59.143749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.913 [2024-07-25 14:53:59.143756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.913 [2024-07-25 14:53:59.152782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.913 [2024-07-25 14:53:59.152802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.913 [2024-07-25 14:53:59.152810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.913 [2024-07-25 14:53:59.162461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e48fb0) 00:26:38.913 [2024-07-25 14:53:59.162480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.913 [2024-07-25 14:53:59.162488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.913 00:26:38.913 Latency(us) 00:26:38.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.913 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:38.913 nvme0n1 : 2.00 25835.92 100.92 0.00 0.00 4948.79 2393.49 23478.98 00:26:38.913 =================================================================================================================== 00:26:38.913 Total : 25835.92 100.92 0.00 0.00 4948.79 2393.49 23478.98 00:26:38.913 0 00:26:38.913 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:38.913 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:38.913 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:38.913 | .driver_specific 00:26:38.913 | .nvme_error 00:26:38.913 | .status_code 00:26:38.913 | .command_transient_transport_error' 00:26:38.913 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:39.173 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 202 > 0 )) 00:26:39.173 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2472930 00:26:39.173 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2472930 ']' 00:26:39.173 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2472930 00:26:39.173 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:39.173 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:39.174 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2472930 00:26:39.174 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:39.174 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' 
reactor_1 = sudo ']' 00:26:39.174 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2472930' 00:26:39.174 killing process with pid 2472930 00:26:39.174 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2472930 00:26:39.174 Received shutdown signal, test time was about 2.000000 seconds 00:26:39.174 00:26:39.174 Latency(us) 00:26:39.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.174 =================================================================================================================== 00:26:39.174 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:39.174 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2472930 00:26:39.434 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:39.434 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:39.434 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:39.434 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:39.434 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:39.434 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2473626 00:26:39.434 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2473626 /var/tmp/bperf.sock 00:26:39.434 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2473626 ']' 00:26:39.434 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:39.434 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:39.434 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:39.434 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:39.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:39.434 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:39.434 14:53:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.434 [2024-07-25 14:53:59.630448] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:26:39.434 [2024-07-25 14:53:59.630495] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473626 ] 00:26:39.434 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:39.434 Zero copy mechanism will not be used. 
00:26:39.434 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.434 [2024-07-25 14:53:59.682813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.694 [2024-07-25 14:53:59.762132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.263 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:40.263 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:40.263 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:40.263 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:40.522 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:40.522 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.522 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:40.522 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.522 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.522 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.782 nvme0n1 00:26:40.782 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:40.782 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.782 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:40.782 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.782 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:40.782 14:54:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:40.782 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:40.782 Zero copy mechanism will not be used. 00:26:40.782 Running I/O for 2 seconds... 
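The trace above is the setup for the second nvmf_digest_error pass (randread, 128 KiB I/O, queue depth 16): bdevperf is started against /var/tmp/bperf.sock, bdev_nvme_set_options enables per-error-code accounting (--nvme-error-stat) and disables bdev-level retries (--bdev-retry-count -1), crc32c error injection is switched off while the controller is attached with --ddgst so the initiator validates the TCP data digest, injection is then re-armed to corrupt every 32nd crc32c operation, and perform_tests drives the 2-second workload. Afterwards digest.sh reads command_transient_transport_error from bdev_get_iostat and requires it to be non-zero, exactly as the previous pass did when it found 202 such errors. A condensed, approximate sketch of that per-pass cycle, using only commands visible in this trace (the un-prefixed rpc.py invocation stands in for rpc_cmd and is assumed to reach the nvmf target's default RPC socket):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # RPC socket of the bdevperf app
  TGT="$SPDK/scripts/rpc.py"                            # target app, default RPC socket (assumed)

  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $TGT accel_error_inject_error -o crc32c -t disable    # keep the attach itself clean
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $TGT accel_error_inject_error -o crc32c -t corrupt -i 32   # corrupt every 32nd crc32c op
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # pass criterion: the injected corruption must surface as transient transport errors
  errs=$($BPERF bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errs > 0 ))

The flood of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" records that follows is that injected corruption being detected on the initiator side during the 2-second run.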
00:26:40.782 [2024-07-25 14:54:01.026579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:40.782 [2024-07-25 14:54:01.026611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.782 [2024-07-25 14:54:01.026621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.782 [2024-07-25 14:54:01.041834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:40.782 [2024-07-25 14:54:01.041860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.782 [2024-07-25 14:54:01.041870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.782 [2024-07-25 14:54:01.055902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:40.782 [2024-07-25 14:54:01.055924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.782 [2024-07-25 14:54:01.055933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.782 [2024-07-25 14:54:01.070165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:40.782 [2024-07-25 14:54:01.070185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.782 [2024-07-25 14:54:01.070194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.042 [2024-07-25 14:54:01.084813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.042 [2024-07-25 14:54:01.084833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.042 [2024-07-25 14:54:01.084841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.042 [2024-07-25 14:54:01.099259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.042 [2024-07-25 14:54:01.099280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.042 [2024-07-25 14:54:01.099288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.042 [2024-07-25 14:54:01.113838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.042 [2024-07-25 14:54:01.113858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.042 [2024-07-25 14:54:01.113865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.042 [2024-07-25 14:54:01.128012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.042 [2024-07-25 14:54:01.128032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.042 [2024-07-25 14:54:01.128040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.042 [2024-07-25 14:54:01.142498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.042 [2024-07-25 14:54:01.142518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.042 [2024-07-25 14:54:01.142526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.042 [2024-07-25 14:54:01.156810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.042 [2024-07-25 14:54:01.156830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.043 [2024-07-25 14:54:01.156837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.043 [2024-07-25 14:54:01.171309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.043 [2024-07-25 14:54:01.171328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.043 [2024-07-25 14:54:01.171335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.043 [2024-07-25 14:54:01.185681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.043 [2024-07-25 14:54:01.185701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.043 [2024-07-25 14:54:01.185712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.043 [2024-07-25 14:54:01.199693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.043 [2024-07-25 14:54:01.199713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.043 [2024-07-25 14:54:01.199721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.043 [2024-07-25 14:54:01.214078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.043 [2024-07-25 14:54:01.214098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.043 [2024-07-25 14:54:01.214106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.043 [2024-07-25 14:54:01.228583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.043 [2024-07-25 14:54:01.228602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.043 [2024-07-25 14:54:01.228610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.043 [2024-07-25 14:54:01.242823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.043 [2024-07-25 14:54:01.242842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.043 [2024-07-25 14:54:01.242850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.043 [2024-07-25 14:54:01.257345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.043 [2024-07-25 14:54:01.257364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.043 [2024-07-25 14:54:01.257372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.043 [2024-07-25 14:54:01.271311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.043 [2024-07-25 14:54:01.271331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.043 [2024-07-25 14:54:01.271338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.043 [2024-07-25 14:54:01.285507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.043 [2024-07-25 14:54:01.285527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.043 [2024-07-25 14:54:01.285535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.043 [2024-07-25 14:54:01.299943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.043 [2024-07-25 14:54:01.299962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.043 [2024-07-25 14:54:01.299970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.043 [2024-07-25 14:54:01.314119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.043 [2024-07-25 14:54:01.314142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:41.043 [2024-07-25 14:54:01.314149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.043 [2024-07-25 14:54:01.328271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.043 [2024-07-25 14:54:01.328290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.043 [2024-07-25 14:54:01.328299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.303 [2024-07-25 14:54:01.342769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.303 [2024-07-25 14:54:01.342789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.303 [2024-07-25 14:54:01.342797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.303 [2024-07-25 14:54:01.357041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.303 [2024-07-25 14:54:01.357065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.303 [2024-07-25 14:54:01.357073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.303 [2024-07-25 14:54:01.371519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.303 [2024-07-25 14:54:01.371538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.303 [2024-07-25 14:54:01.371546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.303 [2024-07-25 14:54:01.385696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.303 [2024-07-25 14:54:01.385715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.303 [2024-07-25 14:54:01.385722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.303 [2024-07-25 14:54:01.400513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.303 [2024-07-25 14:54:01.400532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.303 [2024-07-25 14:54:01.400539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.303 [2024-07-25 14:54:01.415977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.303 [2024-07-25 14:54:01.415997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.303 [2024-07-25 14:54:01.416005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.303 [2024-07-25 14:54:01.431182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.303 [2024-07-25 14:54:01.431201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.303 [2024-07-25 14:54:01.431208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.303 [2024-07-25 14:54:01.446879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.303 [2024-07-25 14:54:01.446900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.303 [2024-07-25 14:54:01.446908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.303 [2024-07-25 14:54:01.461068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.303 [2024-07-25 14:54:01.461088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.303 [2024-07-25 14:54:01.461096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.303 [2024-07-25 14:54:01.475172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.303 [2024-07-25 14:54:01.475192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.303 [2024-07-25 14:54:01.475200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.303 [2024-07-25 14:54:01.489283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.303 [2024-07-25 14:54:01.489313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.303 [2024-07-25 14:54:01.489321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.304 [2024-07-25 14:54:01.514344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.304 [2024-07-25 14:54:01.514364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.304 [2024-07-25 14:54:01.514372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.304 [2024-07-25 14:54:01.530594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.304 [2024-07-25 14:54:01.530613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.304 [2024-07-25 14:54:01.530620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.304 [2024-07-25 14:54:01.545057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.304 [2024-07-25 14:54:01.545076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.304 [2024-07-25 14:54:01.545084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.304 [2024-07-25 14:54:01.559484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.304 [2024-07-25 14:54:01.559504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.304 [2024-07-25 14:54:01.559512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.304 [2024-07-25 14:54:01.574052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.304 [2024-07-25 14:54:01.574075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.304 [2024-07-25 14:54:01.574084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.304 [2024-07-25 14:54:01.588296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.304 [2024-07-25 14:54:01.588315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.304 [2024-07-25 14:54:01.588323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.602739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.564 [2024-07-25 14:54:01.602759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.564 [2024-07-25 14:54:01.602767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.617948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.564 [2024-07-25 14:54:01.617968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.564 [2024-07-25 14:54:01.617976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.633632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 
00:26:41.564 [2024-07-25 14:54:01.633654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.564 [2024-07-25 14:54:01.633662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.649222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.564 [2024-07-25 14:54:01.649243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.564 [2024-07-25 14:54:01.649251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.665119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.564 [2024-07-25 14:54:01.665138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.564 [2024-07-25 14:54:01.665146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.681447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.564 [2024-07-25 14:54:01.681468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.564 [2024-07-25 14:54:01.681476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.696929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.564 [2024-07-25 14:54:01.696949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.564 [2024-07-25 14:54:01.696957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.712088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.564 [2024-07-25 14:54:01.712109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.564 [2024-07-25 14:54:01.712117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.728092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.564 [2024-07-25 14:54:01.728112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.564 [2024-07-25 14:54:01.728120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.743469] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.564 [2024-07-25 14:54:01.743488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.564 [2024-07-25 14:54:01.743496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.758906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.564 [2024-07-25 14:54:01.758926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.564 [2024-07-25 14:54:01.758933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.773688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.564 [2024-07-25 14:54:01.773708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.564 [2024-07-25 14:54:01.773716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.788198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.564 [2024-07-25 14:54:01.788217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.564 [2024-07-25 14:54:01.788225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.804041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.564 [2024-07-25 14:54:01.804065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.564 [2024-07-25 14:54:01.804072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.564 [2024-07-25 14:54:01.828757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.565 [2024-07-25 14:54:01.828776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.565 [2024-07-25 14:54:01.828784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.565 [2024-07-25 14:54:01.844520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.565 [2024-07-25 14:54:01.844541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.565 [2024-07-25 14:54:01.844552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:26:41.824 [2024-07-25 14:54:01.858820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.824 [2024-07-25 14:54:01.858841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.824 [2024-07-25 14:54:01.858849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.824 [2024-07-25 14:54:01.873451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.824 [2024-07-25 14:54:01.873471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.824 [2024-07-25 14:54:01.873479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.824 [2024-07-25 14:54:01.887685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.824 [2024-07-25 14:54:01.887705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.824 [2024-07-25 14:54:01.887713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.824 [2024-07-25 14:54:01.909256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.824 [2024-07-25 14:54:01.909275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.824 [2024-07-25 14:54:01.909283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.824 [2024-07-25 14:54:01.927475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.824 [2024-07-25 14:54:01.927495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.824 [2024-07-25 14:54:01.927503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.824 [2024-07-25 14:54:01.941959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.824 [2024-07-25 14:54:01.941981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.824 [2024-07-25 14:54:01.941989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.824 [2024-07-25 14:54:01.956369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.825 [2024-07-25 14:54:01.956390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.825 [2024-07-25 14:54:01.956398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.825 [2024-07-25 14:54:01.970834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.825 [2024-07-25 14:54:01.970855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.825 [2024-07-25 14:54:01.970863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.825 [2024-07-25 14:54:01.985295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.825 [2024-07-25 14:54:01.985318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.825 [2024-07-25 14:54:01.985326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.825 [2024-07-25 14:54:01.999776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.825 [2024-07-25 14:54:01.999797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.825 [2024-07-25 14:54:01.999804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.825 [2024-07-25 14:54:02.013839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.825 [2024-07-25 14:54:02.013859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.825 [2024-07-25 14:54:02.013867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.825 [2024-07-25 14:54:02.028427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.825 [2024-07-25 14:54:02.028448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.825 [2024-07-25 14:54:02.028456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.825 [2024-07-25 14:54:02.043189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.825 [2024-07-25 14:54:02.043210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.825 [2024-07-25 14:54:02.043219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.825 [2024-07-25 14:54:02.057581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.825 [2024-07-25 14:54:02.057602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.825 [2024-07-25 14:54:02.057610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.825 [2024-07-25 14:54:02.072300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.825 [2024-07-25 14:54:02.072320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.825 [2024-07-25 14:54:02.072329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.825 [2024-07-25 14:54:02.086907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.825 [2024-07-25 14:54:02.086927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.825 [2024-07-25 14:54:02.086935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.825 [2024-07-25 14:54:02.101157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.825 [2024-07-25 14:54:02.101177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.825 [2024-07-25 14:54:02.101188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.825 [2024-07-25 14:54:02.115674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:41.825 [2024-07-25 14:54:02.115695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.825 [2024-07-25 14:54:02.115703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.085 [2024-07-25 14:54:02.129911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.085 [2024-07-25 14:54:02.129932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.085 [2024-07-25 14:54:02.129940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.085 [2024-07-25 14:54:02.144714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.085 [2024-07-25 14:54:02.144735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.085 [2024-07-25 14:54:02.144742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.085 [2024-07-25 14:54:02.158979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.085 [2024-07-25 14:54:02.158999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:42.085 [2024-07-25 14:54:02.159007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.085 [2024-07-25 14:54:02.173546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.085 [2024-07-25 14:54:02.173567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.085 [2024-07-25 14:54:02.173574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.085 [2024-07-25 14:54:02.187818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.085 [2024-07-25 14:54:02.187838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.085 [2024-07-25 14:54:02.187846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.085 [2024-07-25 14:54:02.202068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.085 [2024-07-25 14:54:02.202104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.085 [2024-07-25 14:54:02.202112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.085 [2024-07-25 14:54:02.216233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.085 [2024-07-25 14:54:02.216253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.085 [2024-07-25 14:54:02.216261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.085 [2024-07-25 14:54:02.230680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.085 [2024-07-25 14:54:02.230703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.085 [2024-07-25 14:54:02.230711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.085 [2024-07-25 14:54:02.245208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.085 [2024-07-25 14:54:02.245227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.085 [2024-07-25 14:54:02.245235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.085 [2024-07-25 14:54:02.259644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.085 [2024-07-25 14:54:02.259664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.085 [2024-07-25 14:54:02.259672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.085 [2024-07-25 14:54:02.274333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.085 [2024-07-25 14:54:02.274352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.085 [2024-07-25 14:54:02.274359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.085 [2024-07-25 14:54:02.288943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.085 [2024-07-25 14:54:02.288963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.085 [2024-07-25 14:54:02.288971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.085 [2024-07-25 14:54:02.303681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.085 [2024-07-25 14:54:02.303701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.085 [2024-07-25 14:54:02.303709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.085 [2024-07-25 14:54:02.317989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.085 [2024-07-25 14:54:02.318010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.086 [2024-07-25 14:54:02.318017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.086 [2024-07-25 14:54:02.332374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.086 [2024-07-25 14:54:02.332395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.086 [2024-07-25 14:54:02.332403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.086 [2024-07-25 14:54:02.346844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.086 [2024-07-25 14:54:02.346865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.086 [2024-07-25 14:54:02.346872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.086 [2024-07-25 14:54:02.361188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.086 [2024-07-25 14:54:02.361208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.086 [2024-07-25 14:54:02.361216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.346 [2024-07-25 14:54:02.384393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.346 [2024-07-25 14:54:02.384414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.346 [2024-07-25 14:54:02.384422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.346 [2024-07-25 14:54:02.399725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.346 [2024-07-25 14:54:02.399745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.346 [2024-07-25 14:54:02.399753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.346 [2024-07-25 14:54:02.414493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.346 [2024-07-25 14:54:02.414513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.346 [2024-07-25 14:54:02.414521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.346 [2024-07-25 14:54:02.435730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.346 [2024-07-25 14:54:02.435749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.346 [2024-07-25 14:54:02.435756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.346 [2024-07-25 14:54:02.453977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.346 [2024-07-25 14:54:02.453997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.346 [2024-07-25 14:54:02.454005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.346 [2024-07-25 14:54:02.476875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.346 [2024-07-25 14:54:02.476896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.346 [2024-07-25 14:54:02.476904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.346 [2024-07-25 14:54:02.494942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 
00:26:42.346 [2024-07-25 14:54:02.494963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.346 [2024-07-25 14:54:02.494970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.346 [2024-07-25 14:54:02.508994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.346 [2024-07-25 14:54:02.509014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.347 [2024-07-25 14:54:02.509025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.347 [2024-07-25 14:54:02.523430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.347 [2024-07-25 14:54:02.523450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.347 [2024-07-25 14:54:02.523457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.347 [2024-07-25 14:54:02.537562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.347 [2024-07-25 14:54:02.537581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.347 [2024-07-25 14:54:02.537589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.347 [2024-07-25 14:54:02.552055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.347 [2024-07-25 14:54:02.552090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.347 [2024-07-25 14:54:02.552098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.347 [2024-07-25 14:54:02.566352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.347 [2024-07-25 14:54:02.566371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.347 [2024-07-25 14:54:02.566379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.347 [2024-07-25 14:54:02.580743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.347 [2024-07-25 14:54:02.580762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.347 [2024-07-25 14:54:02.580769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.347 [2024-07-25 14:54:02.595199] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.347 [2024-07-25 14:54:02.595218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.347 [2024-07-25 14:54:02.595226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.347 [2024-07-25 14:54:02.609446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.347 [2024-07-25 14:54:02.609465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.347 [2024-07-25 14:54:02.609473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.347 [2024-07-25 14:54:02.623738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.347 [2024-07-25 14:54:02.623757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.347 [2024-07-25 14:54:02.623765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.347 [2024-07-25 14:54:02.637984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.347 [2024-07-25 14:54:02.638003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.347 [2024-07-25 14:54:02.638011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.607 [2024-07-25 14:54:02.652594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.607 [2024-07-25 14:54:02.652614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.607 [2024-07-25 14:54:02.652621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.607 [2024-07-25 14:54:02.667003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.607 [2024-07-25 14:54:02.667022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.607 [2024-07-25 14:54:02.667030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.607 [2024-07-25 14:54:02.681041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.607 [2024-07-25 14:54:02.681065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.607 [2024-07-25 14:54:02.681089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:42.607 [2024-07-25 14:54:02.695097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.607 [2024-07-25 14:54:02.695116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.607 [2024-07-25 14:54:02.695124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.607 [2024-07-25 14:54:02.709599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.607 [2024-07-25 14:54:02.709618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.607 [2024-07-25 14:54:02.709626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.607 [2024-07-25 14:54:02.723740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.607 [2024-07-25 14:54:02.723760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.607 [2024-07-25 14:54:02.723768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.607 [2024-07-25 14:54:02.738410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.607 [2024-07-25 14:54:02.738430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.607 [2024-07-25 14:54:02.738437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.608 [2024-07-25 14:54:02.752805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.608 [2024-07-25 14:54:02.752825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.608 [2024-07-25 14:54:02.752836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.608 [2024-07-25 14:54:02.767314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.608 [2024-07-25 14:54:02.767334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.608 [2024-07-25 14:54:02.767341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.608 [2024-07-25 14:54:02.781796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.608 [2024-07-25 14:54:02.781815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.608 [2024-07-25 14:54:02.781823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.608 [2024-07-25 14:54:02.796339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.608 [2024-07-25 14:54:02.796358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.608 [2024-07-25 14:54:02.796365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.608 [2024-07-25 14:54:02.810873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.608 [2024-07-25 14:54:02.810892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.608 [2024-07-25 14:54:02.810901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.608 [2024-07-25 14:54:02.825420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.608 [2024-07-25 14:54:02.825439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.608 [2024-07-25 14:54:02.825446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.608 [2024-07-25 14:54:02.839677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.608 [2024-07-25 14:54:02.839696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.608 [2024-07-25 14:54:02.839703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.608 [2024-07-25 14:54:02.854573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.608 [2024-07-25 14:54:02.854593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.608 [2024-07-25 14:54:02.854600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.608 [2024-07-25 14:54:02.870768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.608 [2024-07-25 14:54:02.870789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.608 [2024-07-25 14:54:02.870797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.608 [2024-07-25 14:54:02.887359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.608 [2024-07-25 14:54:02.887387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.608 [2024-07-25 14:54:02.887395] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.868 [2024-07-25 14:54:02.902503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.868 [2024-07-25 14:54:02.902523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.868 [2024-07-25 14:54:02.902531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.868 [2024-07-25 14:54:02.917844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.868 [2024-07-25 14:54:02.917863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.868 [2024-07-25 14:54:02.917871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.868 [2024-07-25 14:54:02.932129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.868 [2024-07-25 14:54:02.932149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.868 [2024-07-25 14:54:02.932156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.868 [2024-07-25 14:54:02.946788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.868 [2024-07-25 14:54:02.946807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.868 [2024-07-25 14:54:02.946815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.868 [2024-07-25 14:54:02.961194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.868 [2024-07-25 14:54:02.961214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.868 [2024-07-25 14:54:02.961221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.868 [2024-07-25 14:54:02.975948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.868 [2024-07-25 14:54:02.975967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.868 [2024-07-25 14:54:02.975975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.868 [2024-07-25 14:54:02.990461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30) 00:26:42.868 [2024-07-25 14:54:02.990480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:26:42.868 [2024-07-25 14:54:02.990487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:42.868 [2024-07-25 14:54:03.004893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2013e30)
00:26:42.868 [2024-07-25 14:54:03.004912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.868 [2024-07-25 14:54:03.004920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:42.868
00:26:42.868 Latency(us)
00:26:42.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:42.868 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:42.868 nvme0n1 : 2.01 2038.93 254.87 0.00 0.00 7841.10 6924.02 24618.74
00:26:42.868 ===================================================================================================================
00:26:42.868 Total : 2038.93 254.87 0.00 0.00 7841.10 6924.02 24618.74
00:26:42.868 0
00:26:42.868 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:42.868 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:42.868 | .driver_specific
00:26:42.868 | .nvme_error
00:26:42.868 | .status_code
00:26:42.868 | .command_transient_transport_error'
00:26:42.868 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:42.868 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:43.128 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 132 > 0 ))
00:26:43.128 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2473626
00:26:43.128 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2473626 ']'
00:26:43.128 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2473626
00:26:43.128 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:43.128 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:43.128 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2473626
00:26:43.128 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:43.128 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:43.128 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2473626'
00:26:43.128 killing process with pid 2473626
00:26:43.128 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2473626
00:26:43.128 Received shutdown signal, test time was about 2.000000 seconds
00:26:43.128
00:26:43.128 Latency(us)
00:26:43.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:43.128 ===================================================================================================================
00:26:43.128 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:43.128 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2473626
00:26:43.387 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:43.387 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:43.387 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:43.387 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:43.387 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:43.387 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2474258
00:26:43.387 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2474258 /var/tmp/bperf.sock
00:26:43.387 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:43.387 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2474258 ']'
00:26:43.387 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:43.387 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:43.387 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:43.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:43.387 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:43.387 14:54:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:43.387 [2024-07-25 14:54:03.481887] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization...
00:26:43.387 [2024-07-25 14:54:03.481938] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474258 ]
00:26:43.387 EAL: No free 2048 kB hugepages reported on node 1
00:26:43.387 [2024-07-25 14:54:03.537177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:43.387 [2024-07-25 14:54:03.613570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:44.324 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:44.324 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:44.324 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:44.324 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:44.324 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:44.324 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:44.324 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:44.324 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:44.324 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:44.324 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:44.584 nvme0n1
00:26:44.584 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:44.584 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:44.584 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:44.584 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:44.584 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:44.584 14:54:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:44.584 Running I/O for 2 seconds...
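The trace above is the setup half of one run_bperf_err phase: a dedicated bdevperf host is started on /var/tmp/bperf.sock, NVMe error statistics are enabled, the controller is attached with TCP data digest (--ddgst) turned on, crc32c corruption is injected into the accel framework, and the workload is run before the transient-transport-error counter is read back (the "(( 132 > 0 ))" check in the previous phase). A minimal hand-run sketch of the same sequence, assembled only from the commands visible in this trace, is shown below; the socket used for the accel_error_inject_error calls is an assumption (the trace issues them through the test's rpc_cmd helper, whose RPC socket is not shown here).

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock
  # Enable per-controller NVMe error counters and set the bdev retry count, as in the trace.
  $RPC -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the target subsystem with TCP data digest enabled.
  $RPC -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt 256 crc32c operations so data-digest verification starts failing.
  # (Socket is an assumption here; the trace routes this through rpc_cmd.)
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
  # Run the bdevperf job, then read back the counter the test asserts on.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
  $RPC -s $SOCK bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'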
00:26:44.844 [2024-07-25 14:54:04.888847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fef90 00:26:44.844 [2024-07-25 14:54:04.890108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:04.890140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:04.899769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:04.900030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:04.900060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:04.909949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:04.910233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:04.910254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:04.919983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:04.920245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:04.920264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:04.929862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:04.930138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:04.930157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:04.939867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:04.940127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:04.940145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:04.949706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:04.949960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:04.949979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:44.844 [2024-07-25 14:54:04.959528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:04.959788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:04.959806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:04.969437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:04.969694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:04.969713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:04.979251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:04.979508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:04.979527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:04.988989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:04.989253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:04.989272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:04.998844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:04.999116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:04.999134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:05.008615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:05.008870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:05.008888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:05.018422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:05.018679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:05.018697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:05.028217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:05.028474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:05.028492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:05.037986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:05.038268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:05.038287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:05.047808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:05.048065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:05.048083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:05.057567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:05.057833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:05.057851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:05.067364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:05.067618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:05.067639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:05.077234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:05.077517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:05.077535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:05.086991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:05.087264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:05.087283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:05.096767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:05.097020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:05.097038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:05.106629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:05.106888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:05.106906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.844 [2024-07-25 14:54:05.116460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.844 [2024-07-25 14:54:05.116719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.844 [2024-07-25 14:54:05.116737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.845 [2024-07-25 14:54:05.126327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:44.845 [2024-07-25 14:54:05.126583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.845 [2024-07-25 14:54:05.126602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.136137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.136395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.136414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.145945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.146210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.146229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.155794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.156057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.156077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.165577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.165837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.165855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.175424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.175698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.175717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.185278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.185540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.185557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.194962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.195228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.195247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.204826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.205079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.205099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.214563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.214834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.214852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.224383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.224641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.224659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.234141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.234393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.234411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.243932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.244193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.244212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.253693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.253956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.253974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.263453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.263711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.263730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.273236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.273490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.273508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.283151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.283405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.283423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.104 [2024-07-25 14:54:05.293093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.104 [2024-07-25 14:54:05.293349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.104 [2024-07-25 14:54:05.293368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.105 [2024-07-25 14:54:05.302850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.105 [2024-07-25 14:54:05.303131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.105 [2024-07-25 14:54:05.303150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.105 [2024-07-25 14:54:05.312633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.105 [2024-07-25 14:54:05.312892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.105 [2024-07-25 14:54:05.312910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.105 [2024-07-25 14:54:05.322424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.105 [2024-07-25 14:54:05.322679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.105 [2024-07-25 14:54:05.322700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.105 [2024-07-25 14:54:05.332199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.105 [2024-07-25 14:54:05.332452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.105 [2024-07-25 14:54:05.332471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.105 [2024-07-25 14:54:05.342021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.105 [2024-07-25 14:54:05.342284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.105 [2024-07-25 14:54:05.342303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.105 [2024-07-25 14:54:05.351857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.105 [2024-07-25 14:54:05.352127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.105 [2024-07-25 14:54:05.352145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.105 [2024-07-25 14:54:05.361663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.105 [2024-07-25 14:54:05.361938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.105 [2024-07-25 14:54:05.361956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.105 [2024-07-25 14:54:05.371444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.105 [2024-07-25 14:54:05.371716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.105 [2024-07-25 14:54:05.371736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.105 [2024-07-25 14:54:05.381297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.105 [2024-07-25 14:54:05.381556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.105 [2024-07-25 14:54:05.381574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.105 [2024-07-25 14:54:05.391151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.105 [2024-07-25 14:54:05.391410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.105 [2024-07-25 14:54:05.391429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.365 [2024-07-25 14:54:05.400919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.365 [2024-07-25 14:54:05.401185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.365 [2024-07-25 14:54:05.401203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.365 [2024-07-25 14:54:05.410750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.365 [2024-07-25 14:54:05.411006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.365 [2024-07-25 14:54:05.411027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.365 [2024-07-25 14:54:05.420567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.365 [2024-07-25 14:54:05.420824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.365 [2024-07-25 14:54:05.420842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.365 [2024-07-25 14:54:05.430338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.365 [2024-07-25 14:54:05.430594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.365 [2024-07-25 14:54:05.430612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.365 [2024-07-25 14:54:05.440146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.365 [2024-07-25 14:54:05.440402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.365 [2024-07-25 14:54:05.440420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.365 [2024-07-25 14:54:05.449829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.365 [2024-07-25 14:54:05.450099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.365 [2024-07-25 14:54:05.450118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.365 [2024-07-25 14:54:05.459533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.365 [2024-07-25 14:54:05.459806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.365 [2024-07-25 14:54:05.459823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.365 [2024-07-25 14:54:05.469320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.365 [2024-07-25 14:54:05.469592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.365 [2024-07-25 14:54:05.469610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.365 [2024-07-25 14:54:05.478996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.365 [2024-07-25 14:54:05.479258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.365 [2024-07-25 14:54:05.479276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.365 [2024-07-25 14:54:05.488779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.365 [2024-07-25 14:54:05.489053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.365 [2024-07-25 14:54:05.489071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.365 [2024-07-25 14:54:05.498586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.365 [2024-07-25 14:54:05.498846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.365 [2024-07-25 14:54:05.498865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.365 [2024-07-25 14:54:05.508307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.365 [2024-07-25 14:54:05.508577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.365 [2024-07-25 14:54:05.508595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.518080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.518339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.518358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.527828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.528098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.528116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.537627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.537884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.537902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.547453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.547731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.547749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.557186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.557440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.557459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.566917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.567180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.567198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.576709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.576975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.576993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.586445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.586700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.586718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.596244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.596514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.596532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.605989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.606265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.606283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.615745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.616013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.616032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.625557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.625812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.625831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.635238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.635494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.635513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.644940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.645216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.645234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.366 [2024-07-25 14:54:05.654751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.366 [2024-07-25 14:54:05.655006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.366 [2024-07-25 14:54:05.655024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.664562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.627 [2024-07-25 14:54:05.664818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.664839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.674361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.627 [2024-07-25 14:54:05.674616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.674633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.684103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.627 [2024-07-25 14:54:05.684375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.684393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.693882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.627 [2024-07-25 14:54:05.694137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.694156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.703720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.627 [2024-07-25 14:54:05.703992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.704009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.713417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.627 [2024-07-25 14:54:05.713693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.713711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.723140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.627 [2024-07-25 14:54:05.723395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.723412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.732965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.627 [2024-07-25 14:54:05.733252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.733270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.742794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.627 [2024-07-25 14:54:05.743056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.743075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.752652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.627 [2024-07-25 14:54:05.752912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.752930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.762531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fc128 00:26:45.627 [2024-07-25 14:54:05.763946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.763964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.779614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190feb58 00:26:45.627 [2024-07-25 14:54:05.781288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.781306] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.790765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fe2e8 00:26:45.627 [2024-07-25 14:54:05.791449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.791467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.800641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fe2e8 00:26:45.627 [2024-07-25 14:54:05.801338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.801357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.810409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fe2e8 00:26:45.627 [2024-07-25 14:54:05.810610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.810629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.820241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fe2e8 00:26:45.627 [2024-07-25 14:54:05.820598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.820616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.833573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fdeb0 00:26:45.627 [2024-07-25 14:54:05.834983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.835001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.844208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fa7d8 00:26:45.627 [2024-07-25 14:54:05.845038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.845060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.853488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f9f68 00:26:45.627 [2024-07-25 14:54:05.854312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.627 [2024-07-25 14:54:05.854332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:45.627 [2024-07-25 14:54:05.863096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6738 00:26:45.627 [2024-07-25 14:54:05.863761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.628 [2024-07-25 14:54:05.863779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:45.628 [2024-07-25 14:54:05.872411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e5220 00:26:45.628 [2024-07-25 14:54:05.873214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.628 [2024-07-25 14:54:05.873234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:45.628 [2024-07-25 14:54:05.881782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6738 00:26:45.628 [2024-07-25 14:54:05.883083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.628 [2024-07-25 14:54:05.883102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:45.628 [2024-07-25 14:54:05.891127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e5220 00:26:45.628 [2024-07-25 14:54:05.892018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.628 [2024-07-25 14:54:05.892036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:45.628 [2024-07-25 14:54:05.900574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6738 00:26:45.628 [2024-07-25 14:54:05.901481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.628 [2024-07-25 14:54:05.901500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:45.628 [2024-07-25 14:54:05.914056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:45.628 [2024-07-25 14:54:05.915600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.628 [2024-07-25 14:54:05.915619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:45.887 [2024-07-25 14:54:05.927746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f2948 00:26:45.887 [2024-07-25 14:54:05.928812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.887 [2024-07-25 
14:54:05.928832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.887 [2024-07-25 14:54:05.938020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f3e60 00:26:45.887 [2024-07-25 14:54:05.938260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.887 [2024-07-25 14:54:05.938283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.887 [2024-07-25 14:54:05.947873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f3e60 00:26:45.888 [2024-07-25 14:54:05.948102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:05.948121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:05.957530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f3e60 00:26:45.888 [2024-07-25 14:54:05.960007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:05.960025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:05.973514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f5378 00:26:45.888 [2024-07-25 14:54:05.975129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:05.975147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:05.983946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f5378 00:26:45.888 [2024-07-25 14:54:05.984401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:05.984419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:05.993782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f5378 00:26:45.888 [2024-07-25 14:54:05.994038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:05.994060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.003512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f5378 00:26:45.888 [2024-07-25 14:54:06.004272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:45.888 [2024-07-25 14:54:06.004290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.013281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f5378 00:26:45.888 [2024-07-25 14:54:06.013640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.013659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.023107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f5378 00:26:45.888 [2024-07-25 14:54:06.023345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.023363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.036846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f2510 00:26:45.888 [2024-07-25 14:54:06.037784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.037805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.047879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f46d0 00:26:45.888 [2024-07-25 14:54:06.048315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.048334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.057611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f46d0 00:26:45.888 [2024-07-25 14:54:06.058388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.058406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.067474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f46d0 00:26:45.888 [2024-07-25 14:54:06.067709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.067727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.077284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f46d0 00:26:45.888 [2024-07-25 14:54:06.077871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6936 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.077889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.089698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f46d0 00:26:45.888 [2024-07-25 14:54:06.091274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.091292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.100767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190ed0b0 00:26:45.888 [2024-07-25 14:54:06.102136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.102155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.110164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e4de8 00:26:45.888 [2024-07-25 14:54:06.111501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.111519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.118841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e8088 00:26:45.888 [2024-07-25 14:54:06.121084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.121102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.135239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e95a0 00:26:45.888 [2024-07-25 14:54:06.136574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.136593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.145114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f2948 00:26:45.888 [2024-07-25 14:54:06.145829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.145847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.155000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f2948 00:26:45.888 [2024-07-25 14:54:06.155229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7480 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.155248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.164880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f2948 00:26:45.888 [2024-07-25 14:54:06.165568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.165586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 14:54:06.174736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f2948 00:26:45.888 [2024-07-25 14:54:06.174955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.888 [2024-07-25 14:54:06.174973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 14:54:06.184573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f2948 00:26:46.148 [2024-07-25 14:54:06.185295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.148 [2024-07-25 14:54:06.185314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 14:54:06.194310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f2948 00:26:46.148 [2024-07-25 14:54:06.194782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.148 [2024-07-25 14:54:06.194801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 14:54:06.207599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fa3a0 00:26:46.148 [2024-07-25 14:54:06.208885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.148 [2024-07-25 14:54:06.208903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 14:54:06.219207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.148 [2024-07-25 14:54:06.220562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.148 [2024-07-25 14:54:06.220581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 14:54:06.228590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.148 [2024-07-25 14:54:06.229937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:31 nsid:1 lba:746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.148 [2024-07-25 14:54:06.229955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 14:54:06.238057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:46.148 [2024-07-25 14:54:06.239430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.148 [2024-07-25 14:54:06.239459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 14:54:06.247518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.148 [2024-07-25 14:54:06.248906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.148 [2024-07-25 14:54:06.248924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 14:54:06.256872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.148 [2024-07-25 14:54:06.258231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.148 [2024-07-25 14:54:06.258248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 14:54:06.266348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:46.148 [2024-07-25 14:54:06.267711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.148 [2024-07-25 14:54:06.267729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 14:54:06.275861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.148 [2024-07-25 14:54:06.277230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.148 [2024-07-25 14:54:06.277247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 14:54:06.285305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.148 [2024-07-25 14:54:06.286670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.148 [2024-07-25 14:54:06.286688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 14:54:06.294944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:46.148 [2024-07-25 14:54:06.296232] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.148 [2024-07-25 14:54:06.296250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 14:54:06.304348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.148 [2024-07-25 14:54:06.305694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.148 [2024-07-25 14:54:06.305718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.149 [2024-07-25 14:54:06.313724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.149 [2024-07-25 14:54:06.315009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.149 [2024-07-25 14:54:06.315028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.149 [2024-07-25 14:54:06.323150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:46.149 [2024-07-25 14:54:06.324489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.149 [2024-07-25 14:54:06.324507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.149 [2024-07-25 14:54:06.332543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.149 [2024-07-25 14:54:06.333900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.149 [2024-07-25 14:54:06.333918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.149 [2024-07-25 14:54:06.342011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.149 [2024-07-25 14:54:06.343380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.149 [2024-07-25 14:54:06.343409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.149 [2024-07-25 14:54:06.351447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:46.149 [2024-07-25 14:54:06.352810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.149 [2024-07-25 14:54:06.352828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.149 [2024-07-25 14:54:06.360943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.149 [2024-07-25 14:54:06.362229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.149 [2024-07-25 14:54:06.362247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.149 [2024-07-25 14:54:06.370432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.149 [2024-07-25 14:54:06.371766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.149 [2024-07-25 14:54:06.371785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.149 [2024-07-25 14:54:06.379866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:46.149 [2024-07-25 14:54:06.381158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.149 [2024-07-25 14:54:06.381176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.149 [2024-07-25 14:54:06.389209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.149 [2024-07-25 14:54:06.390583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.149 [2024-07-25 14:54:06.390600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.149 [2024-07-25 14:54:06.398684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.149 [2024-07-25 14:54:06.399984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.149 [2024-07-25 14:54:06.400002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.149 [2024-07-25 14:54:06.408248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:46.149 [2024-07-25 14:54:06.409560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.149 [2024-07-25 14:54:06.409580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.149 [2024-07-25 14:54:06.417757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.149 [2024-07-25 14:54:06.419053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.149 [2024-07-25 14:54:06.419072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.149 [2024-07-25 14:54:06.427238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.149 [2024-07-25 
14:54:06.428515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.149 [2024-07-25 14:54:06.428533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.149 [2024-07-25 14:54:06.436639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:46.149 [2024-07-25 14:54:06.437907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.149 [2024-07-25 14:54:06.437925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.409 [2024-07-25 14:54:06.446102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.409 [2024-07-25 14:54:06.447431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.447449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.455499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.410 [2024-07-25 14:54:06.456849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.456869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.464877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:46.410 [2024-07-25 14:54:06.466270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.466288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.474381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.410 [2024-07-25 14:54:06.475748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.475766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.483729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.410 [2024-07-25 14:54:06.485113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.485131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.493161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 
00:26:46.410 [2024-07-25 14:54:06.494435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.494454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.502542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.410 [2024-07-25 14:54:06.503911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.503929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.511957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.410 [2024-07-25 14:54:06.513236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.513254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.521412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:46.410 [2024-07-25 14:54:06.522760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.522778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.530859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.410 [2024-07-25 14:54:06.532203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.532222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.540271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.410 [2024-07-25 14:54:06.541681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.541698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.549702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:46.410 [2024-07-25 14:54:06.551063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.551084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.559098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) 
with pdu=0x2000190e6fa8 00:26:46.410 [2024-07-25 14:54:06.560380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.560398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.568532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.410 [2024-07-25 14:54:06.569810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.569829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.577981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:46.410 [2024-07-25 14:54:06.579283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.579300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.587413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.410 [2024-07-25 14:54:06.588679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.588697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.596853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.410 [2024-07-25 14:54:06.598132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.598150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.606287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:46.410 [2024-07-25 14:54:06.607617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.607635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.615616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.410 [2024-07-25 14:54:06.616906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.616924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.625092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.410 [2024-07-25 14:54:06.626471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.626488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.634504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e7c50 00:26:46.410 [2024-07-25 14:54:06.635868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.635886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.643945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6fa8 00:26:46.410 [2024-07-25 14:54:06.645220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.645238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.653315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190fd640 00:26:46.410 [2024-07-25 14:54:06.654744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.654762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.665589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f31b8 00:26:46.410 [2024-07-25 14:54:06.666419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.666437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.677653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190f3a28 00:26:46.410 [2024-07-25 14:54:06.678778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.678796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.689221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e49b0 00:26:46.410 [2024-07-25 14:54:06.690604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.410 [2024-07-25 14:54:06.690632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:46.410 [2024-07-25 14:54:06.698626] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190ea680 00:26:46.410 [2024-07-25 14:54:06.700007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.411 [2024-07-25 14:54:06.700025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:46.671 [2024-07-25 14:54:06.707308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e49b0 00:26:46.671 [2024-07-25 14:54:06.709530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.709547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:46.671 [2024-07-25 14:54:06.720405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e6300 00:26:46.671 [2024-07-25 14:54:06.722075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.722093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:46.671 [2024-07-25 14:54:06.732105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e5a90 00:26:46.671 [2024-07-25 14:54:06.732991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.733010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:46.671 [2024-07-25 14:54:06.741918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e5a90 00:26:46.671 [2024-07-25 14:54:06.742343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.742361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:46.671 [2024-07-25 14:54:06.751729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e5a90 00:26:46.671 [2024-07-25 14:54:06.752008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.752026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:46.671 [2024-07-25 14:54:06.762848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e5658 00:26:46.671 [2024-07-25 14:54:06.765671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.765690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.671 
[2024-07-25 14:54:06.778195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e8088 00:26:46.671 [2024-07-25 14:54:06.779409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.779427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:46.671 [2024-07-25 14:54:06.788013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e4578 00:26:46.671 [2024-07-25 14:54:06.788272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.788291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:46.671 [2024-07-25 14:54:06.797751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e4578 00:26:46.671 [2024-07-25 14:54:06.798004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.798022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:46.671 [2024-07-25 14:54:06.807505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e4578 00:26:46.671 [2024-07-25 14:54:06.807761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.807779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:46.671 [2024-07-25 14:54:06.817196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e4578 00:26:46.671 [2024-07-25 14:54:06.817453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.817474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:46.671 [2024-07-25 14:54:06.826988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e4578 00:26:46.671 [2024-07-25 14:54:06.827239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.827257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:46.671 [2024-07-25 14:54:06.836776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e4578 00:26:46.671 [2024-07-25 14:54:06.837015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.837033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007f 
p:0 m:0 dnr:0 00:26:46.671 [2024-07-25 14:54:06.846509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e4578 00:26:46.671 [2024-07-25 14:54:06.846750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.846769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:46.671 [2024-07-25 14:54:06.856331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36270) with pdu=0x2000190e4578 00:26:46.671 [2024-07-25 14:54:06.856574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.671 [2024-07-25 14:54:06.856592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:46.671 00:26:46.671 Latency(us) 00:26:46.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.671 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:46.671 nvme0n1 : 2.00 25134.91 98.18 0.00 0.00 5083.54 2763.91 28721.86 00:26:46.671 =================================================================================================================== 00:26:46.671 Total : 25134.91 98.18 0.00 0.00 5083.54 2763.91 28721.86 00:26:46.671 0 00:26:46.671 14:54:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:46.671 14:54:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:46.671 14:54:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:46.671 | .driver_specific 00:26:46.671 | .nvme_error 00:26:46.671 | .status_code 00:26:46.671 | .command_transient_transport_error' 00:26:46.671 14:54:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:46.932 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 197 > 0 )) 00:26:46.932 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2474258 00:26:46.932 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2474258 ']' 00:26:46.932 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2474258 00:26:46.932 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:46.932 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:46.932 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2474258 00:26:46.932 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:46.932 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:46.932 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2474258' 00:26:46.932 killing process with pid 2474258 00:26:46.932 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 
-- # kill 2474258 00:26:46.932 Received shutdown signal, test time was about 2.000000 seconds 00:26:46.932 00:26:46.932 Latency(us) 00:26:46.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.932 =================================================================================================================== 00:26:46.932 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:46.932 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2474258 00:26:47.192 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:47.192 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:47.192 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:47.192 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:47.192 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:47.192 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2474933 00:26:47.192 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2474933 /var/tmp/bperf.sock 00:26:47.192 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:47.192 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2474933 ']' 00:26:47.192 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:47.192 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.192 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:47.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:47.192 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.192 14:54:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:47.192 [2024-07-25 14:54:07.337285] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:26:47.192 [2024-07-25 14:54:07.337333] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474933 ] 00:26:47.192 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:47.192 Zero copy mechanism will not be used. 
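The (( 197 > 0 )) check above is the pass condition for the run that just finished: each injected data digest error in the log is completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), and get_transient_errcount adds those completions up by reading the per-bdev NVMe error statistics over the bperf RPC socket with the jq filter shown in the trace. A minimal sketch of that query, assuming the same socket path and bdev name, with SPDK_DIR standing in for the workspace spdk checkout:

# Read the NVMe error statistics kept because bdev_nvme_set_options was called
# with --nvme-error-stat, and extract the transient transport error counter
# that the test compares against zero.
errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 )) && echo "saw $errcount transient transport errors"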
00:26:47.192 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.192 [2024-07-25 14:54:07.390666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.192 [2024-07-25 14:54:07.469911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.132 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:48.132 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:48.132 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:48.132 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:48.132 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:48.132 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.132 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:48.132 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.132 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.132 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.393 nvme0n1 00:26:48.393 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:48.393 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.393 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:48.393 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.393 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:48.393 14:54:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:48.653 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:48.653 Zero copy mechanism will not be used. 00:26:48.653 Running I/O for 2 seconds... 
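The trace above wires up the 131072-byte run before its 2-second randwrite workload begins: NVMe error statistics and unlimited retries are turned on for the bperf bdev layer, the controller is attached with the TCP data digest enabled (--ddgst), crc32c error injection is armed with accel_error_inject_error -o crc32c -t corrupt -i 32, and perform_tests launches the workload. A minimal sketch of that sequence, using only the calls visible in the trace and with SPDK_DIR standing in for the workspace checkout:

RPC_PY="$SPDK_DIR/scripts/rpc.py"
BPERF_SOCK=/var/tmp/bperf.sock

# keep NVMe error statistics and retry indefinitely so injected digest errors
# do not fail the workload outright
"$RPC_PY" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# attach the target subsystem with the TCP data digest enabled
"$RPC_PY" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# corrupt every 32nd crc32c calculation (the harness issues this through
# rpc_cmd; the socket it targets is not shown in the trace)
"$RPC_PY" accel_error_inject_error -o crc32c -t corrupt -i 32

# start the 2-second randwrite workload inside the already-running bdevperf
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests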
00:26:48.653 [2024-07-25 14:54:08.756545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.653 [2024-07-25 14:54:08.757136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.653 [2024-07-25 14:54:08.757164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.653 [2024-07-25 14:54:08.779659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.653 [2024-07-25 14:54:08.780369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.653 [2024-07-25 14:54:08.780391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.653 [2024-07-25 14:54:08.811932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.653 [2024-07-25 14:54:08.812790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.653 [2024-07-25 14:54:08.812810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.654 [2024-07-25 14:54:08.836593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.654 [2024-07-25 14:54:08.837264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.654 [2024-07-25 14:54:08.837284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.654 [2024-07-25 14:54:08.859594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.654 [2024-07-25 14:54:08.860030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.654 [2024-07-25 14:54:08.860054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.654 [2024-07-25 14:54:08.880339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.654 [2024-07-25 14:54:08.881006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.654 [2024-07-25 14:54:08.881025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.654 [2024-07-25 14:54:08.901279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.654 [2024-07-25 14:54:08.901834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.654 [2024-07-25 14:54:08.901853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.654 [2024-07-25 14:54:08.925000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.654 [2024-07-25 14:54:08.925757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.654 [2024-07-25 14:54:08.925776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.914 [2024-07-25 14:54:08.947949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.914 [2024-07-25 14:54:08.948609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.914 [2024-07-25 14:54:08.948627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.914 [2024-07-25 14:54:08.971330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.914 [2024-07-25 14:54:08.972105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.914 [2024-07-25 14:54:08.972125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.914 [2024-07-25 14:54:08.992712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.914 [2024-07-25 14:54:08.993203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.914 [2024-07-25 14:54:08.993221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.914 [2024-07-25 14:54:09.014592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.914 [2024-07-25 14:54:09.015111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.914 [2024-07-25 14:54:09.015130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.914 [2024-07-25 14:54:09.036604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.914 [2024-07-25 14:54:09.037267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.914 [2024-07-25 14:54:09.037286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.914 [2024-07-25 14:54:09.059501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.914 [2024-07-25 14:54:09.060352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.914 [2024-07-25 14:54:09.060374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.914 [2024-07-25 14:54:09.082177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.914 [2024-07-25 14:54:09.082823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.914 [2024-07-25 14:54:09.082842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.914 [2024-07-25 14:54:09.103414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.914 [2024-07-25 14:54:09.104267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.914 [2024-07-25 14:54:09.104285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.914 [2024-07-25 14:54:09.127334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.914 [2024-07-25 14:54:09.128053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.914 [2024-07-25 14:54:09.128073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.914 [2024-07-25 14:54:09.150237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.914 [2024-07-25 14:54:09.151056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.914 [2024-07-25 14:54:09.151077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.914 [2024-07-25 14:54:09.170192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.914 [2024-07-25 14:54:09.170862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.914 [2024-07-25 14:54:09.170881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.914 [2024-07-25 14:54:09.193365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:48.914 [2024-07-25 14:54:09.194076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.914 [2024-07-25 14:54:09.194095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.174 [2024-07-25 14:54:09.215200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.174 [2024-07-25 14:54:09.215677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-07-25 14:54:09.215695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.174 [2024-07-25 14:54:09.237608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.174 [2024-07-25 14:54:09.238212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-07-25 14:54:09.238231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.174 [2024-07-25 14:54:09.259654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.174 [2024-07-25 14:54:09.260203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-07-25 14:54:09.260222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.174 [2024-07-25 14:54:09.282718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.174 [2024-07-25 14:54:09.283409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-07-25 14:54:09.283427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.174 [2024-07-25 14:54:09.305355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.174 [2024-07-25 14:54:09.305931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-07-25 14:54:09.305950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.174 [2024-07-25 14:54:09.327183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.174 [2024-07-25 14:54:09.328158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-07-25 14:54:09.328178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.174 [2024-07-25 14:54:09.349069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.174 [2024-07-25 14:54:09.349502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-07-25 14:54:09.349520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.174 [2024-07-25 14:54:09.373435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.174 [2024-07-25 14:54:09.374165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 
[2024-07-25 14:54:09.374184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.174 [2024-07-25 14:54:09.396578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.174 [2024-07-25 14:54:09.397309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-07-25 14:54:09.397328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.174 [2024-07-25 14:54:09.420283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.174 [2024-07-25 14:54:09.420857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-07-25 14:54:09.420878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.174 [2024-07-25 14:54:09.442367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.174 [2024-07-25 14:54:09.442859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-07-25 14:54:09.442878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.174 [2024-07-25 14:54:09.464287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.174 [2024-07-25 14:54:09.465039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.174 [2024-07-25 14:54:09.465061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.434 [2024-07-25 14:54:09.488867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.434 [2024-07-25 14:54:09.489532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.434 [2024-07-25 14:54:09.489551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.434 [2024-07-25 14:54:09.511910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.434 [2024-07-25 14:54:09.512605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.434 [2024-07-25 14:54:09.512624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.434 [2024-07-25 14:54:09.534527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.434 [2024-07-25 14:54:09.535284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.434 [2024-07-25 14:54:09.535303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.434 [2024-07-25 14:54:09.557666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.434 [2024-07-25 14:54:09.558602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.434 [2024-07-25 14:54:09.558622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.434 [2024-07-25 14:54:09.578941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.434 [2024-07-25 14:54:09.579531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.434 [2024-07-25 14:54:09.579550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.434 [2024-07-25 14:54:09.600712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.434 [2024-07-25 14:54:09.601373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.434 [2024-07-25 14:54:09.601393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.434 [2024-07-25 14:54:09.623860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.434 [2024-07-25 14:54:09.624661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.434 [2024-07-25 14:54:09.624680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.434 [2024-07-25 14:54:09.646874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.434 [2024-07-25 14:54:09.647645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.434 [2024-07-25 14:54:09.647668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.434 [2024-07-25 14:54:09.671654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.434 [2024-07-25 14:54:09.672363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.434 [2024-07-25 14:54:09.672382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.434 [2024-07-25 14:54:09.697008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.434 [2024-07-25 14:54:09.697845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.434 [2024-07-25 14:54:09.697864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.434 [2024-07-25 14:54:09.720847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.434 [2024-07-25 14:54:09.721474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.434 [2024-07-25 14:54:09.721493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.694 [2024-07-25 14:54:09.744116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.694 [2024-07-25 14:54:09.744602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.694 [2024-07-25 14:54:09.744621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.694 [2024-07-25 14:54:09.765873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.694 [2024-07-25 14:54:09.766364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.694 [2024-07-25 14:54:09.766383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.694 [2024-07-25 14:54:09.786337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.694 [2024-07-25 14:54:09.787003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.694 [2024-07-25 14:54:09.787021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.694 [2024-07-25 14:54:09.808650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.694 [2024-07-25 14:54:09.809411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.694 [2024-07-25 14:54:09.809430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.694 [2024-07-25 14:54:09.831397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.694 [2024-07-25 14:54:09.832255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.694 [2024-07-25 14:54:09.832274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.694 [2024-07-25 14:54:09.853682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.695 [2024-07-25 14:54:09.854439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.695 [2024-07-25 14:54:09.854459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.695 [2024-07-25 14:54:09.876192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.695 [2024-07-25 14:54:09.876832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.695 [2024-07-25 14:54:09.876850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.695 [2024-07-25 14:54:09.897763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.695 [2024-07-25 14:54:09.898232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.695 [2024-07-25 14:54:09.898251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.695 [2024-07-25 14:54:09.921324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.695 [2024-07-25 14:54:09.921765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.695 [2024-07-25 14:54:09.921784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.695 [2024-07-25 14:54:09.944429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.695 [2024-07-25 14:54:09.945443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.695 [2024-07-25 14:54:09.945462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.695 [2024-07-25 14:54:09.969238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.695 [2024-07-25 14:54:09.969678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.695 [2024-07-25 14:54:09.969696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.954 [2024-07-25 14:54:09.991116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.954 [2024-07-25 14:54:09.991779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.954 [2024-07-25 14:54:09.991797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.954 [2024-07-25 14:54:10.013279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.954 
[2024-07-25 14:54:10.014113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.954 [2024-07-25 14:54:10.014136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.954 [2024-07-25 14:54:10.034888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.954 [2024-07-25 14:54:10.035554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.954 [2024-07-25 14:54:10.035576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.954 [2024-07-25 14:54:10.059149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.954 [2024-07-25 14:54:10.059788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.954 [2024-07-25 14:54:10.059808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.954 [2024-07-25 14:54:10.079844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.954 [2024-07-25 14:54:10.080738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.954 [2024-07-25 14:54:10.080760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.954 [2024-07-25 14:54:10.100590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.954 [2024-07-25 14:54:10.101410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.954 [2024-07-25 14:54:10.101429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.954 [2024-07-25 14:54:10.123528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.954 [2024-07-25 14:54:10.124053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.954 [2024-07-25 14:54:10.124073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.954 [2024-07-25 14:54:10.145154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.954 [2024-07-25 14:54:10.145805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.954 [2024-07-25 14:54:10.145824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.954 [2024-07-25 14:54:10.166657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.954 [2024-07-25 14:54:10.167214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.954 [2024-07-25 14:54:10.167234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.955 [2024-07-25 14:54:10.191219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.955 [2024-07-25 14:54:10.191865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.955 [2024-07-25 14:54:10.191884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.955 [2024-07-25 14:54:10.212791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.955 [2024-07-25 14:54:10.213370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.955 [2024-07-25 14:54:10.213389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.955 [2024-07-25 14:54:10.234139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:49.955 [2024-07-25 14:54:10.234887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.955 [2024-07-25 14:54:10.234911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.214 [2024-07-25 14:54:10.256906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.214 [2024-07-25 14:54:10.257838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.214 [2024-07-25 14:54:10.257857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.214 [2024-07-25 14:54:10.288391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.214 [2024-07-25 14:54:10.289284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.214 [2024-07-25 14:54:10.289303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.214 [2024-07-25 14:54:10.312817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.214 [2024-07-25 14:54:10.313393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.214 [2024-07-25 14:54:10.313413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.214 [2024-07-25 14:54:10.335661] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.214 [2024-07-25 14:54:10.336324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.214 [2024-07-25 14:54:10.336343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.214 [2024-07-25 14:54:10.359738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.214 [2024-07-25 14:54:10.360597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.214 [2024-07-25 14:54:10.360615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.214 [2024-07-25 14:54:10.392314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.214 [2024-07-25 14:54:10.392976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.214 [2024-07-25 14:54:10.392995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.214 [2024-07-25 14:54:10.419218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.214 [2024-07-25 14:54:10.420330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.214 [2024-07-25 14:54:10.420349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.214 [2024-07-25 14:54:10.450290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.214 [2024-07-25 14:54:10.451040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.214 [2024-07-25 14:54:10.451062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.214 [2024-07-25 14:54:10.473518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.214 [2024-07-25 14:54:10.474451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.214 [2024-07-25 14:54:10.474470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.214 [2024-07-25 14:54:10.497277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.214 [2024-07-25 14:54:10.497749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.214 [2024-07-25 14:54:10.497768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:50.474 [2024-07-25 14:54:10.518858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.474 [2024-07-25 14:54:10.519349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.474 [2024-07-25 14:54:10.519368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.474 [2024-07-25 14:54:10.539685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.474 [2024-07-25 14:54:10.540131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.474 [2024-07-25 14:54:10.540150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.474 [2024-07-25 14:54:10.563438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.474 [2024-07-25 14:54:10.564018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.474 [2024-07-25 14:54:10.564037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.474 [2024-07-25 14:54:10.587115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.474 [2024-07-25 14:54:10.587835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.474 [2024-07-25 14:54:10.587854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.474 [2024-07-25 14:54:10.609125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.474 [2024-07-25 14:54:10.609688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.474 [2024-07-25 14:54:10.609706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.474 [2024-07-25 14:54:10.630173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.474 [2024-07-25 14:54:10.630925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.474 [2024-07-25 14:54:10.630944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.474 [2024-07-25 14:54:10.651482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.474 [2024-07-25 14:54:10.652261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.474 [2024-07-25 14:54:10.652284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.474 [2024-07-25 14:54:10.674829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.474 [2024-07-25 14:54:10.675602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.474 [2024-07-25 14:54:10.675621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.474 [2024-07-25 14:54:10.697218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.474 [2024-07-25 14:54:10.697938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.474 [2024-07-25 14:54:10.697956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.474 [2024-07-25 14:54:10.718948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e36410) with pdu=0x2000190fef90 00:26:50.474 [2024-07-25 14:54:10.719736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.474 [2024-07-25 14:54:10.719754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.474 00:26:50.474 Latency(us) 00:26:50.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.474 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:50.474 nvme0n1 : 2.01 1328.80 166.10 0.00 0.00 12002.83 8833.11 35560.40 00:26:50.474 =================================================================================================================== 00:26:50.474 Total : 1328.80 166.10 0.00 0.00 12002.83 8833.11 35560.40 00:26:50.474 0 00:26:50.474 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:50.476 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:50.476 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:50.476 | .driver_specific 00:26:50.476 | .nvme_error 00:26:50.476 | .status_code 00:26:50.476 | .command_transient_transport_error' 00:26:50.476 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:50.735 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 86 > 0 )) 00:26:50.735 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2474933 00:26:50.735 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2474933 ']' 00:26:50.735 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2474933 00:26:50.736 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:50.736 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:50.736 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 2474933 00:26:50.736 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:50.736 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:50.736 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2474933' 00:26:50.736 killing process with pid 2474933 00:26:50.736 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2474933 00:26:50.736 Received shutdown signal, test time was about 2.000000 seconds 00:26:50.736 00:26:50.736 Latency(us) 00:26:50.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.736 =================================================================================================================== 00:26:50.736 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:50.736 14:54:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2474933 00:26:50.996 14:54:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2472688 00:26:50.996 14:54:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2472688 ']' 00:26:50.996 14:54:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2472688 00:26:50.996 14:54:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:50.996 14:54:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:50.996 14:54:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2472688 00:26:50.996 14:54:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:50.996 14:54:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:50.996 14:54:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2472688' 00:26:50.996 killing process with pid 2472688 00:26:50.996 14:54:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2472688 00:26:50.996 14:54:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2472688 00:26:51.256 00:26:51.256 real 0m16.703s 00:26:51.256 user 0m33.069s 00:26:51.256 sys 0m3.395s 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:51.256 ************************************ 00:26:51.256 END TEST nvmf_digest_error 00:26:51.256 ************************************ 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:51.256 rmmod nvme_tcp 00:26:51.256 rmmod nvme_fabrics 00:26:51.256 rmmod nvme_keyring 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2472688 ']' 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2472688 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2472688 ']' 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2472688 00:26:51.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2472688) - No such process 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2472688 is not found' 00:26:51.256 Process with pid 2472688 is not found 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.256 14:54:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.858 14:54:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:53.858 00:26:53.858 real 0m41.640s 00:26:53.858 user 1m8.374s 00:26:53.858 sys 0m11.064s 00:26:53.858 14:54:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:53.858 14:54:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:53.858 ************************************ 00:26:53.858 END TEST nvmf_digest 00:26:53.858 ************************************ 00:26:53.858 14:54:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:53.858 14:54:13 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:26:53.858 14:54:13 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:26:53.858 14:54:13 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:26:53.858 14:54:13 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:53.858 14:54:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:53.858 14:54:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.858 14:54:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.858 ************************************ 00:26:53.858 START TEST nvmf_bdevperf 00:26:53.858 ************************************ 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:53.858 * Looking for test storage... 
00:26:53.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:53.858 14:54:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:59.139 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:59.139 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:59.139 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:59.140 Found net devices under 0000:86:00.0: cvl_0_0 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:59.140 Found net devices under 0000:86:00.1: cvl_0_1 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:59.140 14:54:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:59.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:26:59.140 00:26:59.140 --- 10.0.0.2 ping statistics --- 00:26:59.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.140 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:59.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:59.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:26:59.140 00:26:59.140 --- 10.0.0.1 ping statistics --- 00:26:59.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.140 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2479331 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2479331 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2479331 ']' 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:59.140 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.140 [2024-07-25 14:54:19.087934] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:26:59.140 [2024-07-25 14:54:19.087983] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.140 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.140 [2024-07-25 14:54:19.149250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:59.140 [2024-07-25 14:54:19.224236] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:59.140 [2024-07-25 14:54:19.224278] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.140 [2024-07-25 14:54:19.224286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.140 [2024-07-25 14:54:19.224292] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.140 [2024-07-25 14:54:19.224297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:59.140 [2024-07-25 14:54:19.224403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.140 [2024-07-25 14:54:19.224427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.140 [2024-07-25 14:54:19.224428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.711 [2024-07-25 14:54:19.936859] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.711 Malloc0 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:59.711 14:54:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.711 [2024-07-25 14:54:19.999810] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.970 14:54:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.970 14:54:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:59.970 14:54:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:59.970 14:54:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:59.970 14:54:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:59.970 14:54:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.970 14:54:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.970 { 00:26:59.971 "params": { 00:26:59.971 "name": "Nvme$subsystem", 00:26:59.971 "trtype": "$TEST_TRANSPORT", 00:26:59.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.971 "adrfam": "ipv4", 00:26:59.971 "trsvcid": "$NVMF_PORT", 00:26:59.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.971 "hdgst": ${hdgst:-false}, 00:26:59.971 "ddgst": ${ddgst:-false} 00:26:59.971 }, 00:26:59.971 "method": "bdev_nvme_attach_controller" 00:26:59.971 } 00:26:59.971 EOF 00:26:59.971 )") 00:26:59.971 14:54:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:59.971 14:54:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:59.971 14:54:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:59.971 14:54:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:59.971 "params": { 00:26:59.971 "name": "Nvme1", 00:26:59.971 "trtype": "tcp", 00:26:59.971 "traddr": "10.0.0.2", 00:26:59.971 "adrfam": "ipv4", 00:26:59.971 "trsvcid": "4420", 00:26:59.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:59.971 "hdgst": false, 00:26:59.971 "ddgst": false 00:26:59.971 }, 00:26:59.971 "method": "bdev_nvme_attach_controller" 00:26:59.971 }' 00:26:59.971 [2024-07-25 14:54:20.051013] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:26:59.971 [2024-07-25 14:54:20.051069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479580 ] 00:26:59.971 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.971 [2024-07-25 14:54:20.105652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.971 [2024-07-25 14:54:20.179159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.230 Running I/O for 1 seconds... 
00:27:01.169 00:27:01.169 Latency(us) 00:27:01.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.169 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:01.169 Verification LBA range: start 0x0 length 0x4000 00:27:01.169 Nvme1n1 : 1.00 10393.77 40.60 0.00 0.00 12264.59 2208.28 30773.43 00:27:01.169 =================================================================================================================== 00:27:01.169 Total : 10393.77 40.60 0.00 0.00 12264.59 2208.28 30773.43 00:27:01.429 14:54:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2479813 00:27:01.429 14:54:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:01.429 14:54:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:01.429 14:54:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:01.429 14:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:01.429 14:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:01.429 14:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.429 14:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.429 { 00:27:01.429 "params": { 00:27:01.429 "name": "Nvme$subsystem", 00:27:01.429 "trtype": "$TEST_TRANSPORT", 00:27:01.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.429 "adrfam": "ipv4", 00:27:01.429 "trsvcid": "$NVMF_PORT", 00:27:01.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.429 "hdgst": ${hdgst:-false}, 00:27:01.429 "ddgst": ${ddgst:-false} 00:27:01.429 }, 00:27:01.429 "method": "bdev_nvme_attach_controller" 00:27:01.429 } 00:27:01.429 EOF 00:27:01.429 )") 00:27:01.429 14:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:01.429 14:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:01.429 14:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:01.429 14:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:01.429 "params": { 00:27:01.429 "name": "Nvme1", 00:27:01.429 "trtype": "tcp", 00:27:01.429 "traddr": "10.0.0.2", 00:27:01.429 "adrfam": "ipv4", 00:27:01.429 "trsvcid": "4420", 00:27:01.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:01.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:01.429 "hdgst": false, 00:27:01.429 "ddgst": false 00:27:01.429 }, 00:27:01.429 "method": "bdev_nvme_attach_controller" 00:27:01.429 }' 00:27:01.429 [2024-07-25 14:54:21.617640] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:27:01.429 [2024-07-25 14:54:21.617686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479813 ] 00:27:01.429 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.429 [2024-07-25 14:54:21.673678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.688 [2024-07-25 14:54:21.746756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.688 Running I/O for 15 seconds... 
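[Editor's sketch, not captured output] The 15-second bdevperf run traced above can be reproduced outside the harness with an equivalent standalone invocation. The inner method/params object below is copied from the printf output in the trace; the surrounding "subsystems"/"bdev"/"config" wrapper is an assumption about what gen_nvmf_target_json emits and is not itself shown in this log.

# Hedged sketch, assuming the standard SPDK JSON config wrapper around the
# bdev_nvme_attach_controller entry printed above; flags mirror the traced run.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/bdevperf -q 128 -o 4096 -w verify -t 15 -f --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)

The trace next kills the target (kill -9) while this run is in flight, which appears to be what the flood of ABORTED - SQ DELETION completions below reflects.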
00:27:04.986 14:54:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2479331 00:27:04.986 14:54:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:04.986 [2024-07-25 14:54:24.588797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.588835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.588854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.588864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.588874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.588882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.588892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.588900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.588909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.588916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.588925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.588932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.588941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.588947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.588956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.588964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.588974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.588982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.588993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 
14:54:24.589002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.589013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.589021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.589031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.589039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.589054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.589068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.589079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.589086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.589095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:115536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.589103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.589113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.589120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.589129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.589137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.589145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.589152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.589162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.589169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.589177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.589183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.589192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:115584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.589202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.589211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:115592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.589218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.589226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.986 [2024-07-25 14:54:24.589233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.986 [2024-07-25 14:54:24.589241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:116128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.987 [2024-07-25 14:54:24.589262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.987 [2024-07-25 14:54:24.589278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.987 [2024-07-25 14:54:24.589292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:116152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.987 [2024-07-25 14:54:24.589306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:116160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.987 [2024-07-25 14:54:24.589321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.987 [2024-07-25 14:54:24.589335] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.987 [2024-07-25 14:54:24.589350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.987 [2024-07-25 14:54:24.589364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:115616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:115624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:115632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:115656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.987 [2024-07-25 14:54:24.589717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:116192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.987 [2024-07-25 14:54:24.589731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:116200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.987 [2024-07-25 14:54:24.589746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.987 [2024-07-25 14:54:24.589760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.987 [2024-07-25 14:54:24.589774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 
[2024-07-25 14:54:24.589782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.987 [2024-07-25 14:54:24.589788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.987 [2024-07-25 14:54:24.589796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.987 [2024-07-25 14:54:24.589803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.589811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.589818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.589826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.589832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.589840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.589847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.589855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:116264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.589861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.589869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.589875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.589883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.589891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.589899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.589906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.589914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.589921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.589929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.589937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.589947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.589954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.589963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.589969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.589978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.589984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.589991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.589998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590086] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:115928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 
nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.988 [2024-07-25 14:54:24.590313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.590327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.590342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:116336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.590357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.590371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.988 [2024-07-25 14:54:24.590379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116352 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:04.988 [2024-07-25 14:54:24.590390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.989 [2024-07-25 14:54:24.590407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.989 [2024-07-25 14:54:24.590423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.989 [2024-07-25 14:54:24.590437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.989 [2024-07-25 14:54:24.590453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:116392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.989 [2024-07-25 14:54:24.590468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.989 [2024-07-25 14:54:24.590483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:116408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.989 [2024-07-25 14:54:24.590498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.989 [2024-07-25 14:54:24.590513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.989 [2024-07-25 14:54:24.590528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.989 
[2024-07-25 14:54:24.590542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.989 [2024-07-25 14:54:24.590556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590687] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.989 [2024-07-25 14:54:24.590774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e930 is same with the state(5) to be set 00:27:04.989 [2024-07-25 14:54:24.590790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.989 [2024-07-25 14:54:24.590795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.989 [2024-07-25 14:54:24.590801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116120 len:8 PRP1 0x0 PRP2 0x0 00:27:04.989 [2024-07-25 14:54:24.590808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-07-25 14:54:24.590849] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x243e930 was disconnected and freed. reset controller. 
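The abort flood above is easier to triage once condensed. A minimal sketch using only grep and awk, assuming this console output has been saved to a file (console.log is a placeholder name, not a path produced by the job):

# total commands completed as ABORTED - SQ DELETION
grep -c 'ABORTED - SQ DELETION' console.log

# break the aborted commands down by opcode (READ vs WRITE)
grep -Eo '(READ|WRITE) sqid:1 cid:[0-9]+' console.log | awk '{n[$1]++} END {for (op in n) print op, n[op]}'

Every command in the flood carries the same completion status (00/08, ABORTED - SQ DELETION), which matches the rest of the trace: the qpair 0x243e930 is disconnected and freed, and the reconnect attempts that follow are refused.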
00:27:04.989 [2024-07-25 14:54:24.593828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.989 [2024-07-25 14:54:24.593884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.989 [2024-07-25 14:54:24.594648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.989 [2024-07-25 14:54:24.594664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.989 [2024-07-25 14:54:24.594671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.989 [2024-07-25 14:54:24.594851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.989 [2024-07-25 14:54:24.595030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.989 [2024-07-25 14:54:24.595038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.989 [2024-07-25 14:54:24.595051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.989 [2024-07-25 14:54:24.597891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.989 [2024-07-25 14:54:24.607111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.989 [2024-07-25 14:54:24.607854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.989 [2024-07-25 14:54:24.607898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.989 [2024-07-25 14:54:24.607921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.989 [2024-07-25 14:54:24.608513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.989 [2024-07-25 14:54:24.608733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.989 [2024-07-25 14:54:24.608740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.989 [2024-07-25 14:54:24.608747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.989 [2024-07-25 14:54:24.611558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.989 [2024-07-25 14:54:24.620038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.989 [2024-07-25 14:54:24.620794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.989 [2024-07-25 14:54:24.620836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.989 [2024-07-25 14:54:24.620858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.990 [2024-07-25 14:54:24.621299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.990 [2024-07-25 14:54:24.621472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.990 [2024-07-25 14:54:24.621480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.990 [2024-07-25 14:54:24.621486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.990 [2024-07-25 14:54:24.624183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.990 [2024-07-25 14:54:24.632918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.990 [2024-07-25 14:54:24.633661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.990 [2024-07-25 14:54:24.633705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.990 [2024-07-25 14:54:24.633726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.990 [2024-07-25 14:54:24.634321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.990 [2024-07-25 14:54:24.634740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.990 [2024-07-25 14:54:24.634748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.990 [2024-07-25 14:54:24.634754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.990 [2024-07-25 14:54:24.637441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.990 [2024-07-25 14:54:24.645945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.990 [2024-07-25 14:54:24.646687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.990 [2024-07-25 14:54:24.646730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.990 [2024-07-25 14:54:24.646751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.990 [2024-07-25 14:54:24.647260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.990 [2024-07-25 14:54:24.647433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.990 [2024-07-25 14:54:24.647441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.990 [2024-07-25 14:54:24.647447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.990 [2024-07-25 14:54:24.650189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.990 [2024-07-25 14:54:24.658826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.990 [2024-07-25 14:54:24.659459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.990 [2024-07-25 14:54:24.659502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.990 [2024-07-25 14:54:24.659523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.990 [2024-07-25 14:54:24.660117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.990 [2024-07-25 14:54:24.660693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.990 [2024-07-25 14:54:24.660701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.990 [2024-07-25 14:54:24.660707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.990 [2024-07-25 14:54:24.663464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.990 [2024-07-25 14:54:24.671656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.990 [2024-07-25 14:54:24.672081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.990 [2024-07-25 14:54:24.672097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.990 [2024-07-25 14:54:24.672103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.990 [2024-07-25 14:54:24.672266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.990 [2024-07-25 14:54:24.672428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.990 [2024-07-25 14:54:24.672435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.990 [2024-07-25 14:54:24.672441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.990 [2024-07-25 14:54:24.675140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.990 [2024-07-25 14:54:24.684531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.990 [2024-07-25 14:54:24.685236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.990 [2024-07-25 14:54:24.685290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.990 [2024-07-25 14:54:24.685311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.990 [2024-07-25 14:54:24.685815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.990 [2024-07-25 14:54:24.685978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.990 [2024-07-25 14:54:24.685985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.990 [2024-07-25 14:54:24.685991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.990 [2024-07-25 14:54:24.688695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.990 [2024-07-25 14:54:24.697394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.990 [2024-07-25 14:54:24.698033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.990 [2024-07-25 14:54:24.698087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.990 [2024-07-25 14:54:24.698109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.990 [2024-07-25 14:54:24.698518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.990 [2024-07-25 14:54:24.698691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.990 [2024-07-25 14:54:24.698699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.990 [2024-07-25 14:54:24.698704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.990 [2024-07-25 14:54:24.701392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.990 [2024-07-25 14:54:24.710362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.990 [2024-07-25 14:54:24.711094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.990 [2024-07-25 14:54:24.711137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.990 [2024-07-25 14:54:24.711158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.990 [2024-07-25 14:54:24.711475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.990 [2024-07-25 14:54:24.711648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.990 [2024-07-25 14:54:24.711656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.990 [2024-07-25 14:54:24.711663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.990 [2024-07-25 14:54:24.714369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.990 [2024-07-25 14:54:24.723308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.990 [2024-07-25 14:54:24.724031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.990 [2024-07-25 14:54:24.724086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.990 [2024-07-25 14:54:24.724108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.990 [2024-07-25 14:54:24.724515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.990 [2024-07-25 14:54:24.724687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.990 [2024-07-25 14:54:24.724694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.990 [2024-07-25 14:54:24.724700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.990 [2024-07-25 14:54:24.727405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.991 [2024-07-25 14:54:24.736184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.991 [2024-07-25 14:54:24.736886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.991 [2024-07-25 14:54:24.736934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.991 [2024-07-25 14:54:24.736955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.991 [2024-07-25 14:54:24.737503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.991 [2024-07-25 14:54:24.737677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.991 [2024-07-25 14:54:24.737685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.991 [2024-07-25 14:54:24.737691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.991 [2024-07-25 14:54:24.740419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.991 [2024-07-25 14:54:24.749052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.991 [2024-07-25 14:54:24.749755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.991 [2024-07-25 14:54:24.749773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.991 [2024-07-25 14:54:24.749779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.991 [2024-07-25 14:54:24.749942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.991 [2024-07-25 14:54:24.750128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.991 [2024-07-25 14:54:24.750137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.991 [2024-07-25 14:54:24.750142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.991 [2024-07-25 14:54:24.752826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.991 [2024-07-25 14:54:24.761907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.991 [2024-07-25 14:54:24.762659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.991 [2024-07-25 14:54:24.762701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.991 [2024-07-25 14:54:24.762722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.991 [2024-07-25 14:54:24.763057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.991 [2024-07-25 14:54:24.763230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.991 [2024-07-25 14:54:24.763238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.991 [2024-07-25 14:54:24.763244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.991 [2024-07-25 14:54:24.765929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.991 [2024-07-25 14:54:24.774816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.991 [2024-07-25 14:54:24.775231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.991 [2024-07-25 14:54:24.775247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.991 [2024-07-25 14:54:24.775253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.991 [2024-07-25 14:54:24.775416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.991 [2024-07-25 14:54:24.775578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.991 [2024-07-25 14:54:24.775585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.991 [2024-07-25 14:54:24.775591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.991 [2024-07-25 14:54:24.778292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.991 [2024-07-25 14:54:24.787837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.991 [2024-07-25 14:54:24.788550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.991 [2024-07-25 14:54:24.788593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.991 [2024-07-25 14:54:24.788614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.991 [2024-07-25 14:54:24.788974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.991 [2024-07-25 14:54:24.789164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.991 [2024-07-25 14:54:24.789172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.991 [2024-07-25 14:54:24.789178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.991 [2024-07-25 14:54:24.791864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.991 [2024-07-25 14:54:24.800711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.991 [2024-07-25 14:54:24.801423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.991 [2024-07-25 14:54:24.801466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.991 [2024-07-25 14:54:24.801488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.991 [2024-07-25 14:54:24.801751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.991 [2024-07-25 14:54:24.801924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.991 [2024-07-25 14:54:24.801931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.991 [2024-07-25 14:54:24.801937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.991 [2024-07-25 14:54:24.804627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.991 [2024-07-25 14:54:24.813633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.991 [2024-07-25 14:54:24.814499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.991 [2024-07-25 14:54:24.814545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.991 [2024-07-25 14:54:24.814565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.991 [2024-07-25 14:54:24.815001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.991 [2024-07-25 14:54:24.815180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.991 [2024-07-25 14:54:24.815189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.991 [2024-07-25 14:54:24.815195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.991 [2024-07-25 14:54:24.817918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.991 [2024-07-25 14:54:24.826663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.991 [2024-07-25 14:54:24.827327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.991 [2024-07-25 14:54:24.827371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.991 [2024-07-25 14:54:24.827393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.991 [2024-07-25 14:54:24.827972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.991 [2024-07-25 14:54:24.828499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.991 [2024-07-25 14:54:24.828507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.991 [2024-07-25 14:54:24.828513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.991 [2024-07-25 14:54:24.831198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.991 [2024-07-25 14:54:24.839850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.991 [2024-07-25 14:54:24.840538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.991 [2024-07-25 14:54:24.840554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.991 [2024-07-25 14:54:24.840561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.991 [2024-07-25 14:54:24.840737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.991 [2024-07-25 14:54:24.840915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.991 [2024-07-25 14:54:24.840923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.991 [2024-07-25 14:54:24.840929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.991 [2024-07-25 14:54:24.843773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.991 [2024-07-25 14:54:24.852979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.992 [2024-07-25 14:54:24.853676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-07-25 14:54:24.853693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.992 [2024-07-25 14:54:24.853699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.992 [2024-07-25 14:54:24.853876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.992 [2024-07-25 14:54:24.854059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.992 [2024-07-25 14:54:24.854068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.992 [2024-07-25 14:54:24.854074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.992 [2024-07-25 14:54:24.856908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.992 [2024-07-25 14:54:24.866048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.992 [2024-07-25 14:54:24.866706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-07-25 14:54:24.866748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.992 [2024-07-25 14:54:24.866770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.992 [2024-07-25 14:54:24.867200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.992 [2024-07-25 14:54:24.867374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.992 [2024-07-25 14:54:24.867382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.992 [2024-07-25 14:54:24.867389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.992 [2024-07-25 14:54:24.870145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.992 [2024-07-25 14:54:24.879165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.992 [2024-07-25 14:54:24.879817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-07-25 14:54:24.879834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.992 [2024-07-25 14:54:24.879844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.992 [2024-07-25 14:54:24.880022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.992 [2024-07-25 14:54:24.880205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.992 [2024-07-25 14:54:24.880213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.992 [2024-07-25 14:54:24.880220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.992 [2024-07-25 14:54:24.883054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.992 [2024-07-25 14:54:24.892185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.992 [2024-07-25 14:54:24.892917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-07-25 14:54:24.892959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.992 [2024-07-25 14:54:24.892981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.992 [2024-07-25 14:54:24.893578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.992 [2024-07-25 14:54:24.894170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.992 [2024-07-25 14:54:24.894178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.992 [2024-07-25 14:54:24.894184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.992 [2024-07-25 14:54:24.896911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.992 [2024-07-25 14:54:24.905149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.992 [2024-07-25 14:54:24.905799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-07-25 14:54:24.905842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.992 [2024-07-25 14:54:24.905863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.992 [2024-07-25 14:54:24.906358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.992 [2024-07-25 14:54:24.906537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.992 [2024-07-25 14:54:24.906545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.992 [2024-07-25 14:54:24.906551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.992 [2024-07-25 14:54:24.909269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.992 [2024-07-25 14:54:24.918134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.992 [2024-07-25 14:54:24.918747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-07-25 14:54:24.918788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.992 [2024-07-25 14:54:24.918809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.992 [2024-07-25 14:54:24.919207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.992 [2024-07-25 14:54:24.919380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.992 [2024-07-25 14:54:24.919391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.992 [2024-07-25 14:54:24.919397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.992 [2024-07-25 14:54:24.922114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.992 [2024-07-25 14:54:24.931086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.992 [2024-07-25 14:54:24.931662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-07-25 14:54:24.931678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.992 [2024-07-25 14:54:24.931685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.992 [2024-07-25 14:54:24.931856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.992 [2024-07-25 14:54:24.932028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.992 [2024-07-25 14:54:24.932036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.992 [2024-07-25 14:54:24.932047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.992 [2024-07-25 14:54:24.934801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.992 [2024-07-25 14:54:24.944133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.992 [2024-07-25 14:54:24.944647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-07-25 14:54:24.944688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.992 [2024-07-25 14:54:24.944710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.992 [2024-07-25 14:54:24.945225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.992 [2024-07-25 14:54:24.945398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.992 [2024-07-25 14:54:24.945406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.992 [2024-07-25 14:54:24.945412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.992 [2024-07-25 14:54:24.948174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.992 [2024-07-25 14:54:24.956990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.992 [2024-07-25 14:54:24.957585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-07-25 14:54:24.957601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.992 [2024-07-25 14:54:24.957608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.992 [2024-07-25 14:54:24.957779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.992 [2024-07-25 14:54:24.957953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.992 [2024-07-25 14:54:24.957961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.992 [2024-07-25 14:54:24.957966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.992 [2024-07-25 14:54:24.960710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.992 [2024-07-25 14:54:24.969961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.992 [2024-07-25 14:54:24.970616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-07-25 14:54:24.970659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.992 [2024-07-25 14:54:24.970680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.992 [2024-07-25 14:54:24.971275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.992 [2024-07-25 14:54:24.971784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.992 [2024-07-25 14:54:24.971792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.992 [2024-07-25 14:54:24.971798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.992 [2024-07-25 14:54:24.974532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.992 [2024-07-25 14:54:24.982929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.993 [2024-07-25 14:54:24.983580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-07-25 14:54:24.983623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.993 [2024-07-25 14:54:24.983644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.993 [2024-07-25 14:54:24.984156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.993 [2024-07-25 14:54:24.984330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.993 [2024-07-25 14:54:24.984338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.993 [2024-07-25 14:54:24.984343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.993 [2024-07-25 14:54:24.987109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.993 [2024-07-25 14:54:24.995946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.993 [2024-07-25 14:54:24.996673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-07-25 14:54:24.996716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.993 [2024-07-25 14:54:24.996737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.993 [2024-07-25 14:54:24.997253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.993 [2024-07-25 14:54:24.997426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.993 [2024-07-25 14:54:24.997434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.993 [2024-07-25 14:54:24.997440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.993 [2024-07-25 14:54:25.000149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.993 [2024-07-25 14:54:25.008876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.993 [2024-07-25 14:54:25.009519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-07-25 14:54:25.009536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.993 [2024-07-25 14:54:25.009543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.993 [2024-07-25 14:54:25.009718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.993 [2024-07-25 14:54:25.009890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.993 [2024-07-25 14:54:25.009897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.993 [2024-07-25 14:54:25.009903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.993 [2024-07-25 14:54:25.012601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.993 [2024-07-25 14:54:25.021810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.993 [2024-07-25 14:54:25.022528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-07-25 14:54:25.022571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.993 [2024-07-25 14:54:25.022592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.993 [2024-07-25 14:54:25.023193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.993 [2024-07-25 14:54:25.023562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.993 [2024-07-25 14:54:25.023570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.993 [2024-07-25 14:54:25.023576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.993 [2024-07-25 14:54:25.026320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.993 [2024-07-25 14:54:25.034702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.993 [2024-07-25 14:54:25.035348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-07-25 14:54:25.035392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.993 [2024-07-25 14:54:25.035413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.993 [2024-07-25 14:54:25.035776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.993 [2024-07-25 14:54:25.035949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.993 [2024-07-25 14:54:25.035956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.993 [2024-07-25 14:54:25.035962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.993 [2024-07-25 14:54:25.038656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.993 [2024-07-25 14:54:25.047725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.993 [2024-07-25 14:54:25.048460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-07-25 14:54:25.048503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.993 [2024-07-25 14:54:25.048523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.993 [2024-07-25 14:54:25.049113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.993 [2024-07-25 14:54:25.049561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.993 [2024-07-25 14:54:25.049570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.993 [2024-07-25 14:54:25.049579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.993 [2024-07-25 14:54:25.053646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.993 [2024-07-25 14:54:25.061585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.993 [2024-07-25 14:54:25.062296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-07-25 14:54:25.062312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.993 [2024-07-25 14:54:25.062319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.993 [2024-07-25 14:54:25.062496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.993 [2024-07-25 14:54:25.062679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.993 [2024-07-25 14:54:25.062687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.993 [2024-07-25 14:54:25.062693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.993 [2024-07-25 14:54:25.065454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.993 [2024-07-25 14:54:25.074607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.993 [2024-07-25 14:54:25.075263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-07-25 14:54:25.075306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.993 [2024-07-25 14:54:25.075327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.993 [2024-07-25 14:54:25.075689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.993 [2024-07-25 14:54:25.075853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.993 [2024-07-25 14:54:25.075860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.993 [2024-07-25 14:54:25.075866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.993 [2024-07-25 14:54:25.078620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.993 [2024-07-25 14:54:25.087638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.993 [2024-07-25 14:54:25.088371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-07-25 14:54:25.088415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.993 [2024-07-25 14:54:25.088436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.993 [2024-07-25 14:54:25.088890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.993 [2024-07-25 14:54:25.089069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.993 [2024-07-25 14:54:25.089077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.993 [2024-07-25 14:54:25.089082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.993 [2024-07-25 14:54:25.091810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.993 [2024-07-25 14:54:25.100844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.993 [2024-07-25 14:54:25.101500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-07-25 14:54:25.101543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.993 [2024-07-25 14:54:25.101564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.993 [2024-07-25 14:54:25.101841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.993 [2024-07-25 14:54:25.102019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.993 [2024-07-25 14:54:25.102027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.993 [2024-07-25 14:54:25.102034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.993 [2024-07-25 14:54:25.104867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.993 [2024-07-25 14:54:25.113837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.993 [2024-07-25 14:54:25.114480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-07-25 14:54:25.114524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.994 [2024-07-25 14:54:25.114545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.994 [2024-07-25 14:54:25.114866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.994 [2024-07-25 14:54:25.115038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.994 [2024-07-25 14:54:25.115052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.994 [2024-07-25 14:54:25.115058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.994 [2024-07-25 14:54:25.117872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.994 [2024-07-25 14:54:25.126733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.994 [2024-07-25 14:54:25.127452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-07-25 14:54:25.127495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.994 [2024-07-25 14:54:25.127517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.994 [2024-07-25 14:54:25.128107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.994 [2024-07-25 14:54:25.128691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.994 [2024-07-25 14:54:25.128715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.994 [2024-07-25 14:54:25.128735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.994 [2024-07-25 14:54:25.131514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.994 [2024-07-25 14:54:25.139582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.994 [2024-07-25 14:54:25.140257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-07-25 14:54:25.140300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.994 [2024-07-25 14:54:25.140321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.994 [2024-07-25 14:54:25.140671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.994 [2024-07-25 14:54:25.140847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.994 [2024-07-25 14:54:25.140854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.994 [2024-07-25 14:54:25.140860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.994 [2024-07-25 14:54:25.143547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.994 [2024-07-25 14:54:25.152449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.994 [2024-07-25 14:54:25.153164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-07-25 14:54:25.153206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.994 [2024-07-25 14:54:25.153227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.994 [2024-07-25 14:54:25.153565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.994 [2024-07-25 14:54:25.153737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.994 [2024-07-25 14:54:25.153745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.994 [2024-07-25 14:54:25.153751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.994 [2024-07-25 14:54:25.156441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.994 [2024-07-25 14:54:25.165326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.994 [2024-07-25 14:54:25.166034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-07-25 14:54:25.166088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.994 [2024-07-25 14:54:25.166109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.994 [2024-07-25 14:54:25.166688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.994 [2024-07-25 14:54:25.167011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.994 [2024-07-25 14:54:25.167019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.994 [2024-07-25 14:54:25.167025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.994 [2024-07-25 14:54:25.169714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.994 [2024-07-25 14:54:25.178237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.994 [2024-07-25 14:54:25.178849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-07-25 14:54:25.178891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.994 [2024-07-25 14:54:25.178913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.994 [2024-07-25 14:54:25.179516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.994 [2024-07-25 14:54:25.179690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.994 [2024-07-25 14:54:25.179697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.994 [2024-07-25 14:54:25.179704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.994 [2024-07-25 14:54:25.182430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.994 [2024-07-25 14:54:25.191121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.994 [2024-07-25 14:54:25.191768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-07-25 14:54:25.191811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.994 [2024-07-25 14:54:25.191831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.994 [2024-07-25 14:54:25.192422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.994 [2024-07-25 14:54:25.192938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.994 [2024-07-25 14:54:25.192946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.994 [2024-07-25 14:54:25.192952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.994 [2024-07-25 14:54:25.195689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.994 [2024-07-25 14:54:25.203980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.994 [2024-07-25 14:54:25.204631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-07-25 14:54:25.204674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.994 [2024-07-25 14:54:25.204695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.994 [2024-07-25 14:54:25.205076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.994 [2024-07-25 14:54:25.205249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.994 [2024-07-25 14:54:25.205257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.994 [2024-07-25 14:54:25.205263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.994 [2024-07-25 14:54:25.208010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.994 [2024-07-25 14:54:25.216944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.994 [2024-07-25 14:54:25.217662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-07-25 14:54:25.217705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.994 [2024-07-25 14:54:25.217726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.994 [2024-07-25 14:54:25.218069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.994 [2024-07-25 14:54:25.218241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.994 [2024-07-25 14:54:25.218249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.994 [2024-07-25 14:54:25.218255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.994 [2024-07-25 14:54:25.220937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.994 [2024-07-25 14:54:25.229836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.994 [2024-07-25 14:54:25.230461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-07-25 14:54:25.230503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.994 [2024-07-25 14:54:25.230531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.994 [2024-07-25 14:54:25.230997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.994 [2024-07-25 14:54:25.231173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.994 [2024-07-25 14:54:25.231181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.994 [2024-07-25 14:54:25.231187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.994 [2024-07-25 14:54:25.233931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.994 [2024-07-25 14:54:25.242793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.994 [2024-07-25 14:54:25.243508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-07-25 14:54:25.243550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.995 [2024-07-25 14:54:25.243571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.995 [2024-07-25 14:54:25.244082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.995 [2024-07-25 14:54:25.244255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.995 [2024-07-25 14:54:25.244263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.995 [2024-07-25 14:54:25.244269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.995 [2024-07-25 14:54:25.247032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.995 [2024-07-25 14:54:25.255745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.995 [2024-07-25 14:54:25.256444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.995 [2024-07-25 14:54:25.256477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.995 [2024-07-25 14:54:25.256498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.995 [2024-07-25 14:54:25.257048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.995 [2024-07-25 14:54:25.257236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.995 [2024-07-25 14:54:25.257244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.995 [2024-07-25 14:54:25.257250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.995 [2024-07-25 14:54:25.259929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.995 [2024-07-25 14:54:25.268725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.995 [2024-07-25 14:54:25.269424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.995 [2024-07-25 14:54:25.269465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:04.995 [2024-07-25 14:54:25.269486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:04.995 [2024-07-25 14:54:25.269925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:04.995 [2024-07-25 14:54:25.270120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.995 [2024-07-25 14:54:25.270132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.995 [2024-07-25 14:54:25.270138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.995 [2024-07-25 14:54:25.272926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.255 [2024-07-25 14:54:25.281682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.255 [2024-07-25 14:54:25.282384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-07-25 14:54:25.282426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.255 [2024-07-25 14:54:25.282447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.255 [2024-07-25 14:54:25.282998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.255 [2024-07-25 14:54:25.283190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.255 [2024-07-25 14:54:25.283199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.255 [2024-07-25 14:54:25.283205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.255 [2024-07-25 14:54:25.285885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.255 [2024-07-25 14:54:25.294582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.255 [2024-07-25 14:54:25.295302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-07-25 14:54:25.295347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.255 [2024-07-25 14:54:25.295370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.255 [2024-07-25 14:54:25.295950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.255 [2024-07-25 14:54:25.296364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.255 [2024-07-25 14:54:25.296372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.255 [2024-07-25 14:54:25.296378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.255 [2024-07-25 14:54:25.299061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.255 [2024-07-25 14:54:25.307491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.255 [2024-07-25 14:54:25.308205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-07-25 14:54:25.308248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.255 [2024-07-25 14:54:25.308269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.255 [2024-07-25 14:54:25.308631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.255 [2024-07-25 14:54:25.308794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.255 [2024-07-25 14:54:25.308801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.255 [2024-07-25 14:54:25.308807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.255 [2024-07-25 14:54:25.311506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.255 [2024-07-25 14:54:25.320293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.255 [2024-07-25 14:54:25.320934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-07-25 14:54:25.320976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.255 [2024-07-25 14:54:25.320998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.255 [2024-07-25 14:54:25.321336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.255 [2024-07-25 14:54:25.321508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.255 [2024-07-25 14:54:25.321516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.255 [2024-07-25 14:54:25.321522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.255 [2024-07-25 14:54:25.324214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.255 [2024-07-25 14:54:25.333090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.255 [2024-07-25 14:54:25.333779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-07-25 14:54:25.333821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.255 [2024-07-25 14:54:25.333842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.255 [2024-07-25 14:54:25.334264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.255 [2024-07-25 14:54:25.334436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.255 [2024-07-25 14:54:25.334444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.255 [2024-07-25 14:54:25.334451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.255 [2024-07-25 14:54:25.337134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.255 [2024-07-25 14:54:25.346075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.255 [2024-07-25 14:54:25.346761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-07-25 14:54:25.346777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.255 [2024-07-25 14:54:25.346783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.255 [2024-07-25 14:54:25.346960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.255 [2024-07-25 14:54:25.347143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.255 [2024-07-25 14:54:25.347151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.255 [2024-07-25 14:54:25.347157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.255 [2024-07-25 14:54:25.349989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.255 [2024-07-25 14:54:25.359129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.255 [2024-07-25 14:54:25.359763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-07-25 14:54:25.359805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.256 [2024-07-25 14:54:25.359833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.256 [2024-07-25 14:54:25.360392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.256 [2024-07-25 14:54:25.360565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.256 [2024-07-25 14:54:25.360573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.256 [2024-07-25 14:54:25.360579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.256 [2024-07-25 14:54:25.363348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.256 [2024-07-25 14:54:25.372143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.256 [2024-07-25 14:54:25.372890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-07-25 14:54:25.372933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.256 [2024-07-25 14:54:25.372954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.256 [2024-07-25 14:54:25.373477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.256 [2024-07-25 14:54:25.373655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.256 [2024-07-25 14:54:25.373663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.256 [2024-07-25 14:54:25.373670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.256 [2024-07-25 14:54:25.376449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.256 [2024-07-25 14:54:25.385048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.256 [2024-07-25 14:54:25.385747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-07-25 14:54:25.385788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.256 [2024-07-25 14:54:25.385809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.256 [2024-07-25 14:54:25.386228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.256 [2024-07-25 14:54:25.386401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.256 [2024-07-25 14:54:25.386409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.256 [2024-07-25 14:54:25.386415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.256 [2024-07-25 14:54:25.389071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.256 [2024-07-25 14:54:25.397843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.256 [2024-07-25 14:54:25.398527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-07-25 14:54:25.398570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.256 [2024-07-25 14:54:25.398591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.256 [2024-07-25 14:54:25.399182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.256 [2024-07-25 14:54:25.399512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.256 [2024-07-25 14:54:25.399524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.256 [2024-07-25 14:54:25.399530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.256 [2024-07-25 14:54:25.402213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.256 [2024-07-25 14:54:25.411054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.256 [2024-07-25 14:54:25.411771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-07-25 14:54:25.411814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.256 [2024-07-25 14:54:25.411835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.256 [2024-07-25 14:54:25.412431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.256 [2024-07-25 14:54:25.412661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.256 [2024-07-25 14:54:25.412668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.256 [2024-07-25 14:54:25.412674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.256 [2024-07-25 14:54:25.415357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.256 [2024-07-25 14:54:25.423915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.256 [2024-07-25 14:54:25.424623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-07-25 14:54:25.424666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.256 [2024-07-25 14:54:25.424687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.256 [2024-07-25 14:54:25.425239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.256 [2024-07-25 14:54:25.425412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.256 [2024-07-25 14:54:25.425420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.256 [2024-07-25 14:54:25.425426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.256 [2024-07-25 14:54:25.428109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.256 [2024-07-25 14:54:25.436834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.256 [2024-07-25 14:54:25.437554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-07-25 14:54:25.437597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.256 [2024-07-25 14:54:25.437618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.256 [2024-07-25 14:54:25.438212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.256 [2024-07-25 14:54:25.438538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.256 [2024-07-25 14:54:25.438546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.256 [2024-07-25 14:54:25.438552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.256 [2024-07-25 14:54:25.441235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.256 [2024-07-25 14:54:25.449740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.256 [2024-07-25 14:54:25.450452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-07-25 14:54:25.450493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.256 [2024-07-25 14:54:25.450514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.256 [2024-07-25 14:54:25.450977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.256 [2024-07-25 14:54:25.451155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.256 [2024-07-25 14:54:25.451163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.256 [2024-07-25 14:54:25.451169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.256 [2024-07-25 14:54:25.453847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.256 [2024-07-25 14:54:25.462784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.256 [2024-07-25 14:54:25.463492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-07-25 14:54:25.463536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.256 [2024-07-25 14:54:25.463560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.256 [2024-07-25 14:54:25.464150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.256 [2024-07-25 14:54:25.464595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.256 [2024-07-25 14:54:25.464607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.256 [2024-07-25 14:54:25.464616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.256 [2024-07-25 14:54:25.468685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.256 [2024-07-25 14:54:25.476447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.256 [2024-07-25 14:54:25.477155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-07-25 14:54:25.477198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.256 [2024-07-25 14:54:25.477219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.256 [2024-07-25 14:54:25.477579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.256 [2024-07-25 14:54:25.477752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.256 [2024-07-25 14:54:25.477760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.256 [2024-07-25 14:54:25.477766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.256 [2024-07-25 14:54:25.480518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.256 [2024-07-25 14:54:25.489334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.257 [2024-07-25 14:54:25.490067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.257 [2024-07-25 14:54:25.490084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.257 [2024-07-25 14:54:25.490091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.257 [2024-07-25 14:54:25.490274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.257 [2024-07-25 14:54:25.490437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.257 [2024-07-25 14:54:25.490445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.257 [2024-07-25 14:54:25.490450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.257 [2024-07-25 14:54:25.493197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.257 [2024-07-25 14:54:25.502242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.257 [2024-07-25 14:54:25.502977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.257 [2024-07-25 14:54:25.503020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.257 [2024-07-25 14:54:25.503054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.257 [2024-07-25 14:54:25.503405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.257 [2024-07-25 14:54:25.503578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.257 [2024-07-25 14:54:25.503586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.257 [2024-07-25 14:54:25.503593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.257 [2024-07-25 14:54:25.506301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.257 [2024-07-25 14:54:25.515234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.257 [2024-07-25 14:54:25.515917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.257 [2024-07-25 14:54:25.515959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.257 [2024-07-25 14:54:25.515980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.257 [2024-07-25 14:54:25.516451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.257 [2024-07-25 14:54:25.516624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.257 [2024-07-25 14:54:25.516632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.257 [2024-07-25 14:54:25.516638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.257 [2024-07-25 14:54:25.519392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.257 [2024-07-25 14:54:25.528025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.257 [2024-07-25 14:54:25.528737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.257 [2024-07-25 14:54:25.528780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.257 [2024-07-25 14:54:25.528801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.257 [2024-07-25 14:54:25.529396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.257 [2024-07-25 14:54:25.529884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.257 [2024-07-25 14:54:25.529892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.257 [2024-07-25 14:54:25.529901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.257 [2024-07-25 14:54:25.532584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.257 [2024-07-25 14:54:25.540974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.257 [2024-07-25 14:54:25.541683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.257 [2024-07-25 14:54:25.541725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.257 [2024-07-25 14:54:25.541746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.257 [2024-07-25 14:54:25.542342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.257 [2024-07-25 14:54:25.542842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.257 [2024-07-25 14:54:25.542851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.257 [2024-07-25 14:54:25.542857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.257 [2024-07-25 14:54:25.545629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.517 [2024-07-25 14:54:25.553964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.517 [2024-07-25 14:54:25.554682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.517 [2024-07-25 14:54:25.554725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.517 [2024-07-25 14:54:25.554746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.517 [2024-07-25 14:54:25.555242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.517 [2024-07-25 14:54:25.555414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.517 [2024-07-25 14:54:25.555422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.517 [2024-07-25 14:54:25.555428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.517 [2024-07-25 14:54:25.559360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.517 [2024-07-25 14:54:25.567570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.517 [2024-07-25 14:54:25.568284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.517 [2024-07-25 14:54:25.568326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.517 [2024-07-25 14:54:25.568347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.517 [2024-07-25 14:54:25.568838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.517 [2024-07-25 14:54:25.569011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.517 [2024-07-25 14:54:25.569019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.517 [2024-07-25 14:54:25.569025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.517 [2024-07-25 14:54:25.571744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.517 [2024-07-25 14:54:25.580389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.517 [2024-07-25 14:54:25.581074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.517 [2024-07-25 14:54:25.581124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.517 [2024-07-25 14:54:25.581146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.517 [2024-07-25 14:54:25.581725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.517 [2024-07-25 14:54:25.582263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.517 [2024-07-25 14:54:25.582271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.518 [2024-07-25 14:54:25.582277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.518 [2024-07-25 14:54:25.585025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.518 [2024-07-25 14:54:25.593291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.518 [2024-07-25 14:54:25.594008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.518 [2024-07-25 14:54:25.594062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.518 [2024-07-25 14:54:25.594085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.518 [2024-07-25 14:54:25.594664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.518 [2024-07-25 14:54:25.595079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.518 [2024-07-25 14:54:25.595087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.518 [2024-07-25 14:54:25.595093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.518 [2024-07-25 14:54:25.597840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.518 [2024-07-25 14:54:25.606419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.518 [2024-07-25 14:54:25.607122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.518 [2024-07-25 14:54:25.607166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.518 [2024-07-25 14:54:25.607188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.518 [2024-07-25 14:54:25.607768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.518 [2024-07-25 14:54:25.608138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.518 [2024-07-25 14:54:25.608157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.518 [2024-07-25 14:54:25.608163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.518 [2024-07-25 14:54:25.610990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.518 [2024-07-25 14:54:25.619316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.518 [2024-07-25 14:54:25.619975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.518 [2024-07-25 14:54:25.620017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.518 [2024-07-25 14:54:25.620038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.518 [2024-07-25 14:54:25.620633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.518 [2024-07-25 14:54:25.621028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.518 [2024-07-25 14:54:25.621036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.518 [2024-07-25 14:54:25.621046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.518 [2024-07-25 14:54:25.623732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.518 [2024-07-25 14:54:25.632121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.518 [2024-07-25 14:54:25.632817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.518 [2024-07-25 14:54:25.632860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.518 [2024-07-25 14:54:25.632881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.518 [2024-07-25 14:54:25.633476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.518 [2024-07-25 14:54:25.633992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.518 [2024-07-25 14:54:25.634000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.518 [2024-07-25 14:54:25.634006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.518 [2024-07-25 14:54:25.636798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.518 [2024-07-25 14:54:25.645048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.518 [2024-07-25 14:54:25.645765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.518 [2024-07-25 14:54:25.645807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.518 [2024-07-25 14:54:25.645828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.518 [2024-07-25 14:54:25.646201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.518 [2024-07-25 14:54:25.646455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.518 [2024-07-25 14:54:25.646466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.518 [2024-07-25 14:54:25.646475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.518 [2024-07-25 14:54:25.650542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.518 [2024-07-25 14:54:25.658607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.518 [2024-07-25 14:54:25.659319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.518 [2024-07-25 14:54:25.659362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.518 [2024-07-25 14:54:25.659383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.518 [2024-07-25 14:54:25.659963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.518 [2024-07-25 14:54:25.660301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.518 [2024-07-25 14:54:25.660309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.518 [2024-07-25 14:54:25.660314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.518 [2024-07-25 14:54:25.663067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.518 [2024-07-25 14:54:25.671428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.518 [2024-07-25 14:54:25.672140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.518 [2024-07-25 14:54:25.672183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.518 [2024-07-25 14:54:25.672204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.518 [2024-07-25 14:54:25.672783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.518 [2024-07-25 14:54:25.672977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.518 [2024-07-25 14:54:25.672985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.518 [2024-07-25 14:54:25.672991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.518 [2024-07-25 14:54:25.675685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.518 [2024-07-25 14:54:25.684271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.518 [2024-07-25 14:54:25.684982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.518 [2024-07-25 14:54:25.685024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.518 [2024-07-25 14:54:25.685059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.518 [2024-07-25 14:54:25.685391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.518 [2024-07-25 14:54:25.685564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.518 [2024-07-25 14:54:25.685571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.518 [2024-07-25 14:54:25.685578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.518 [2024-07-25 14:54:25.688265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.518 [2024-07-25 14:54:25.697163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.518 [2024-07-25 14:54:25.697875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.518 [2024-07-25 14:54:25.697890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.518 [2024-07-25 14:54:25.697897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.518 [2024-07-25 14:54:25.698075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.518 [2024-07-25 14:54:25.698248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.518 [2024-07-25 14:54:25.698256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.518 [2024-07-25 14:54:25.698262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.518 [2024-07-25 14:54:25.700948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.518 [2024-07-25 14:54:25.710038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.518 [2024-07-25 14:54:25.710745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.518 [2024-07-25 14:54:25.710788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.518 [2024-07-25 14:54:25.710815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.518 [2024-07-25 14:54:25.711261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.518 [2024-07-25 14:54:25.711434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.518 [2024-07-25 14:54:25.711442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.518 [2024-07-25 14:54:25.711447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.519 [2024-07-25 14:54:25.714131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.519 [2024-07-25 14:54:25.722851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.519 [2024-07-25 14:54:25.723506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.519 [2024-07-25 14:54:25.723549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.519 [2024-07-25 14:54:25.723571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.519 [2024-07-25 14:54:25.723814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.519 [2024-07-25 14:54:25.723987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.519 [2024-07-25 14:54:25.723995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.519 [2024-07-25 14:54:25.724001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.519 [2024-07-25 14:54:25.726692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.519 [2024-07-25 14:54:25.735760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.519 [2024-07-25 14:54:25.736418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.519 [2024-07-25 14:54:25.736460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.519 [2024-07-25 14:54:25.736481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.519 [2024-07-25 14:54:25.737000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.519 [2024-07-25 14:54:25.737179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.519 [2024-07-25 14:54:25.737187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.519 [2024-07-25 14:54:25.737193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.519 [2024-07-25 14:54:25.739873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.519 [2024-07-25 14:54:25.748706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.519 [2024-07-25 14:54:25.749393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.519 [2024-07-25 14:54:25.749436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.519 [2024-07-25 14:54:25.749457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.519 [2024-07-25 14:54:25.750035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.519 [2024-07-25 14:54:25.750250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.519 [2024-07-25 14:54:25.750264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.519 [2024-07-25 14:54:25.750270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.519 [2024-07-25 14:54:25.752950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.519 [2024-07-25 14:54:25.761538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.519 [2024-07-25 14:54:25.762258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.519 [2024-07-25 14:54:25.762302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.519 [2024-07-25 14:54:25.762323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.519 [2024-07-25 14:54:25.762907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.519 [2024-07-25 14:54:25.763415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.519 [2024-07-25 14:54:25.763423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.519 [2024-07-25 14:54:25.763430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.519 [2024-07-25 14:54:25.766148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.519 [2024-07-25 14:54:25.774411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.519 [2024-07-25 14:54:25.775077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.519 [2024-07-25 14:54:25.775120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.519 [2024-07-25 14:54:25.775141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.519 [2024-07-25 14:54:25.775385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.519 [2024-07-25 14:54:25.775557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.519 [2024-07-25 14:54:25.775564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.519 [2024-07-25 14:54:25.775570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.519 [2024-07-25 14:54:25.778275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.519 [2024-07-25 14:54:25.787299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.519 [2024-07-25 14:54:25.787964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.519 [2024-07-25 14:54:25.788006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.519 [2024-07-25 14:54:25.788027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.519 [2024-07-25 14:54:25.788453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.519 [2024-07-25 14:54:25.788705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.519 [2024-07-25 14:54:25.788716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.519 [2024-07-25 14:54:25.788725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.519 [2024-07-25 14:54:25.792791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.519 [2024-07-25 14:54:25.800676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.519 [2024-07-25 14:54:25.801398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.519 [2024-07-25 14:54:25.801441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.519 [2024-07-25 14:54:25.801462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.519 [2024-07-25 14:54:25.801687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.519 [2024-07-25 14:54:25.801860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.519 [2024-07-25 14:54:25.801868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.519 [2024-07-25 14:54:25.801874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.519 [2024-07-25 14:54:25.804646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.780 [2024-07-25 14:54:25.813670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.780 [2024-07-25 14:54:25.814385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.780 [2024-07-25 14:54:25.814429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.780 [2024-07-25 14:54:25.814451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.780 [2024-07-25 14:54:25.814702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.780 [2024-07-25 14:54:25.814875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.780 [2024-07-25 14:54:25.814882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.781 [2024-07-25 14:54:25.814889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.781 [2024-07-25 14:54:25.817676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.781 [2024-07-25 14:54:25.826459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.781 [2024-07-25 14:54:25.827147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-07-25 14:54:25.827188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-07-25 14:54:25.827209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.781 [2024-07-25 14:54:25.827531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.781 [2024-07-25 14:54:25.827694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.781 [2024-07-25 14:54:25.827701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.781 [2024-07-25 14:54:25.827707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.781 [2024-07-25 14:54:25.830403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.781 [2024-07-25 14:54:25.839331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.781 [2024-07-25 14:54:25.840013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-07-25 14:54:25.840066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-07-25 14:54:25.840089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.781 [2024-07-25 14:54:25.840472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.781 [2024-07-25 14:54:25.840635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.781 [2024-07-25 14:54:25.840643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.781 [2024-07-25 14:54:25.840648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.781 [2024-07-25 14:54:25.843337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.781 [2024-07-25 14:54:25.852268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.781 [2024-07-25 14:54:25.852966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-07-25 14:54:25.852981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-07-25 14:54:25.852988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.781 [2024-07-25 14:54:25.853184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.781 [2024-07-25 14:54:25.853361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.781 [2024-07-25 14:54:25.853369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.781 [2024-07-25 14:54:25.853376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.781 [2024-07-25 14:54:25.856220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.781 [2024-07-25 14:54:25.865381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.781 [2024-07-25 14:54:25.866098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-07-25 14:54:25.866141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-07-25 14:54:25.866162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.781 [2024-07-25 14:54:25.866392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.781 [2024-07-25 14:54:25.866564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.781 [2024-07-25 14:54:25.866572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.781 [2024-07-25 14:54:25.866578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.781 [2024-07-25 14:54:25.869327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.781 [2024-07-25 14:54:25.878277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.781 [2024-07-25 14:54:25.878967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-07-25 14:54:25.879010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-07-25 14:54:25.879031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.781 [2024-07-25 14:54:25.879519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.781 [2024-07-25 14:54:25.879692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.781 [2024-07-25 14:54:25.879700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.781 [2024-07-25 14:54:25.879709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.781 [2024-07-25 14:54:25.882438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.781 [2024-07-25 14:54:25.891163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.781 [2024-07-25 14:54:25.891847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-07-25 14:54:25.891889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-07-25 14:54:25.891910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.781 [2024-07-25 14:54:25.892382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.781 [2024-07-25 14:54:25.892555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.781 [2024-07-25 14:54:25.892563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.781 [2024-07-25 14:54:25.892569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.781 [2024-07-25 14:54:25.895256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.781 [2024-07-25 14:54:25.903974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.781 [2024-07-25 14:54:25.904632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-07-25 14:54:25.904648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-07-25 14:54:25.904654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.781 [2024-07-25 14:54:25.904826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.781 [2024-07-25 14:54:25.905001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.781 [2024-07-25 14:54:25.905008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.781 [2024-07-25 14:54:25.905014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.781 [2024-07-25 14:54:25.907701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.781 [2024-07-25 14:54:25.916877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.781 [2024-07-25 14:54:25.917576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-07-25 14:54:25.917593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-07-25 14:54:25.917600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.781 [2024-07-25 14:54:25.917777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.781 [2024-07-25 14:54:25.917957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.781 [2024-07-25 14:54:25.917965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.781 [2024-07-25 14:54:25.917971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.781 [2024-07-25 14:54:25.920794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.781 [2024-07-25 14:54:25.929708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.781 [2024-07-25 14:54:25.930424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-07-25 14:54:25.930467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-07-25 14:54:25.930488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.781 [2024-07-25 14:54:25.930764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.781 [2024-07-25 14:54:25.930949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.781 [2024-07-25 14:54:25.930957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.781 [2024-07-25 14:54:25.930964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.781 [2024-07-25 14:54:25.933780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.781 [2024-07-25 14:54:25.942724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.781 [2024-07-25 14:54:25.943464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-07-25 14:54:25.943507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-07-25 14:54:25.943529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.781 [2024-07-25 14:54:25.943785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.781 [2024-07-25 14:54:25.943964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.781 [2024-07-25 14:54:25.943972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.782 [2024-07-25 14:54:25.943978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.782 [2024-07-25 14:54:25.946754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.782 [2024-07-25 14:54:25.955711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.782 [2024-07-25 14:54:25.956433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-07-25 14:54:25.956476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-07-25 14:54:25.956496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.782 [2024-07-25 14:54:25.956659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.782 [2024-07-25 14:54:25.956823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.782 [2024-07-25 14:54:25.956830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.782 [2024-07-25 14:54:25.956836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.782 [2024-07-25 14:54:25.959570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.782 [2024-07-25 14:54:25.968611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.782 [2024-07-25 14:54:25.969346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-07-25 14:54:25.969389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-07-25 14:54:25.969410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.782 [2024-07-25 14:54:25.969802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.782 [2024-07-25 14:54:25.969976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.782 [2024-07-25 14:54:25.969984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.782 [2024-07-25 14:54:25.969991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.782 [2024-07-25 14:54:25.973917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.782 [2024-07-25 14:54:25.982203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.782 [2024-07-25 14:54:25.982936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-07-25 14:54:25.982977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-07-25 14:54:25.982999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.782 [2024-07-25 14:54:25.983572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.782 [2024-07-25 14:54:25.983745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.782 [2024-07-25 14:54:25.983753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.782 [2024-07-25 14:54:25.983758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.782 [2024-07-25 14:54:25.986510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.782 [2024-07-25 14:54:25.995133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.782 [2024-07-25 14:54:25.995703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-07-25 14:54:25.995719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-07-25 14:54:25.995726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.782 [2024-07-25 14:54:25.995898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.782 [2024-07-25 14:54:25.996081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.782 [2024-07-25 14:54:25.996089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.782 [2024-07-25 14:54:25.996095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.782 [2024-07-25 14:54:25.998840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.782 [2024-07-25 14:54:26.008029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.782 [2024-07-25 14:54:26.008745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-07-25 14:54:26.008786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-07-25 14:54:26.008807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.782 [2024-07-25 14:54:26.009099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.782 [2024-07-25 14:54:26.009272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.782 [2024-07-25 14:54:26.009280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.782 [2024-07-25 14:54:26.009289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.782 [2024-07-25 14:54:26.012009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.782 [2024-07-25 14:54:26.021057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.782 [2024-07-25 14:54:26.021761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-07-25 14:54:26.021804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-07-25 14:54:26.021824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.782 [2024-07-25 14:54:26.022416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.782 [2024-07-25 14:54:26.022976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.782 [2024-07-25 14:54:26.022984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.782 [2024-07-25 14:54:26.022990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.782 [2024-07-25 14:54:26.025693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.782 [2024-07-25 14:54:26.033888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.782 [2024-07-25 14:54:26.034608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-07-25 14:54:26.034651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-07-25 14:54:26.034672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.782 [2024-07-25 14:54:26.035080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.782 [2024-07-25 14:54:26.035253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.782 [2024-07-25 14:54:26.035261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.782 [2024-07-25 14:54:26.035267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.782 [2024-07-25 14:54:26.037952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.782 [2024-07-25 14:54:26.046700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.782 [2024-07-25 14:54:26.047341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-07-25 14:54:26.047357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-07-25 14:54:26.047363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.782 [2024-07-25 14:54:26.047545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.782 [2024-07-25 14:54:26.047709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.782 [2024-07-25 14:54:26.047717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.782 [2024-07-25 14:54:26.047722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.782 [2024-07-25 14:54:26.050424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.782 [2024-07-25 14:54:26.059522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.782 [2024-07-25 14:54:26.060161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-07-25 14:54:26.060211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-07-25 14:54:26.060232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:05.782 [2024-07-25 14:54:26.060811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:05.782 [2024-07-25 14:54:26.061103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.782 [2024-07-25 14:54:26.061115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.782 [2024-07-25 14:54:26.061124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.782 [2024-07-25 14:54:26.065196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.043 [2024-07-25 14:54:26.073014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.043 [2024-07-25 14:54:26.073738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.043 [2024-07-25 14:54:26.073754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.043 [2024-07-25 14:54:26.073761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.043 [2024-07-25 14:54:26.073938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.043 [2024-07-25 14:54:26.074123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.043 [2024-07-25 14:54:26.074131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.043 [2024-07-25 14:54:26.074137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.043 [2024-07-25 14:54:26.076914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.043 [2024-07-25 14:54:26.085868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.043 [2024-07-25 14:54:26.086585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.043 [2024-07-25 14:54:26.086627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.043 [2024-07-25 14:54:26.086648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.043 [2024-07-25 14:54:26.087086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.043 [2024-07-25 14:54:26.087259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.043 [2024-07-25 14:54:26.087267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.043 [2024-07-25 14:54:26.087273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.043 [2024-07-25 14:54:26.089980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.043 [2024-07-25 14:54:26.098723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.043 [2024-07-25 14:54:26.099412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.043 [2024-07-25 14:54:26.099453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.043 [2024-07-25 14:54:26.099474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.043 [2024-07-25 14:54:26.099938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.043 [2024-07-25 14:54:26.100127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.043 [2024-07-25 14:54:26.100136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.043 [2024-07-25 14:54:26.100142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.043 [2024-07-25 14:54:26.102823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.043 [2024-07-25 14:54:26.111877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.043 [2024-07-25 14:54:26.112640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.043 [2024-07-25 14:54:26.112683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.043 [2024-07-25 14:54:26.112704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.043 [2024-07-25 14:54:26.112979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.043 [2024-07-25 14:54:26.113156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.043 [2024-07-25 14:54:26.113165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.043 [2024-07-25 14:54:26.113171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.043 [2024-07-25 14:54:26.115989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.043 [2024-07-25 14:54:26.124985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.043 [2024-07-25 14:54:26.125636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.043 [2024-07-25 14:54:26.125652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.043 [2024-07-25 14:54:26.125659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.043 [2024-07-25 14:54:26.125831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.043 [2024-07-25 14:54:26.126003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.043 [2024-07-25 14:54:26.126011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.043 [2024-07-25 14:54:26.126017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.043 [2024-07-25 14:54:26.128835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.043 [2024-07-25 14:54:26.137805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.043 [2024-07-25 14:54:26.138513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.043 [2024-07-25 14:54:26.138555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.043 [2024-07-25 14:54:26.138576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.043 [2024-07-25 14:54:26.139173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.043 [2024-07-25 14:54:26.139480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.043 [2024-07-25 14:54:26.139488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.043 [2024-07-25 14:54:26.139494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.043 [2024-07-25 14:54:26.142184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.043 [2024-07-25 14:54:26.150704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.043 [2024-07-25 14:54:26.151436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.043 [2024-07-25 14:54:26.151479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.043 [2024-07-25 14:54:26.151501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.043 [2024-07-25 14:54:26.152088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.043 [2024-07-25 14:54:26.152343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.043 [2024-07-25 14:54:26.152354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.043 [2024-07-25 14:54:26.152362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.043 [2024-07-25 14:54:26.156430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.043 [2024-07-25 14:54:26.164106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.044 [2024-07-25 14:54:26.164807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.044 [2024-07-25 14:54:26.164848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.044 [2024-07-25 14:54:26.164870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.044 [2024-07-25 14:54:26.165331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.044 [2024-07-25 14:54:26.165504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.044 [2024-07-25 14:54:26.165512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.044 [2024-07-25 14:54:26.165518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.044 [2024-07-25 14:54:26.168285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.044 [2024-07-25 14:54:26.176895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.044 [2024-07-25 14:54:26.177623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.044 [2024-07-25 14:54:26.177666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.044 [2024-07-25 14:54:26.177687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.044 [2024-07-25 14:54:26.178175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.044 [2024-07-25 14:54:26.178349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.044 [2024-07-25 14:54:26.178356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.044 [2024-07-25 14:54:26.178362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.044 [2024-07-25 14:54:26.181047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.044 [2024-07-25 14:54:26.189783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.044 [2024-07-25 14:54:26.190491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.044 [2024-07-25 14:54:26.190533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.044 [2024-07-25 14:54:26.190561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.044 [2024-07-25 14:54:26.191157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.044 [2024-07-25 14:54:26.191502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.044 [2024-07-25 14:54:26.191509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.044 [2024-07-25 14:54:26.191515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.044 [2024-07-25 14:54:26.194200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.044 [2024-07-25 14:54:26.202682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.044 [2024-07-25 14:54:26.203373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.044 [2024-07-25 14:54:26.203415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.044 [2024-07-25 14:54:26.203437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.044 [2024-07-25 14:54:26.204014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.044 [2024-07-25 14:54:26.204398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.044 [2024-07-25 14:54:26.204406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.044 [2024-07-25 14:54:26.204412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.044 [2024-07-25 14:54:26.207098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.044 [2024-07-25 14:54:26.215565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.044 [2024-07-25 14:54:26.216199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.044 [2024-07-25 14:54:26.216254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.044 [2024-07-25 14:54:26.216276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.044 [2024-07-25 14:54:26.216857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.044 [2024-07-25 14:54:26.217436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.044 [2024-07-25 14:54:26.217445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.044 [2024-07-25 14:54:26.217451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.044 [2024-07-25 14:54:26.220229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.044 [2024-07-25 14:54:26.228437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.044 [2024-07-25 14:54:26.229187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.044 [2024-07-25 14:54:26.229230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.044 [2024-07-25 14:54:26.229251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.044 [2024-07-25 14:54:26.229614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.044 [2024-07-25 14:54:26.229787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.044 [2024-07-25 14:54:26.229798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.044 [2024-07-25 14:54:26.229805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.044 [2024-07-25 14:54:26.232531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.044 [2024-07-25 14:54:26.241478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.044 [2024-07-25 14:54:26.242162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.044 [2024-07-25 14:54:26.242204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.044 [2024-07-25 14:54:26.242226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.044 [2024-07-25 14:54:26.242807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.044 [2024-07-25 14:54:26.243130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.044 [2024-07-25 14:54:26.243144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.044 [2024-07-25 14:54:26.243153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.044 [2024-07-25 14:54:26.247228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.044 [2024-07-25 14:54:26.254789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.044 [2024-07-25 14:54:26.255488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.044 [2024-07-25 14:54:26.255533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.044 [2024-07-25 14:54:26.255555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.044 [2024-07-25 14:54:26.256149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.044 [2024-07-25 14:54:26.256592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.044 [2024-07-25 14:54:26.256600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.044 [2024-07-25 14:54:26.256606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.044 [2024-07-25 14:54:26.259341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.044 [2024-07-25 14:54:26.267762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.044 [2024-07-25 14:54:26.268494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.044 [2024-07-25 14:54:26.268538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.044 [2024-07-25 14:54:26.268559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.044 [2024-07-25 14:54:26.269018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.044 [2024-07-25 14:54:26.269210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.044 [2024-07-25 14:54:26.269218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.044 [2024-07-25 14:54:26.269224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.044 [2024-07-25 14:54:26.271913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.044 [2024-07-25 14:54:26.281001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.044 [2024-07-25 14:54:26.281638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.044 [2024-07-25 14:54:26.281680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.044 [2024-07-25 14:54:26.281701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.044 [2024-07-25 14:54:26.282295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.044 [2024-07-25 14:54:26.282730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.044 [2024-07-25 14:54:26.282738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.044 [2024-07-25 14:54:26.282743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.044 [2024-07-25 14:54:26.285465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.044 [2024-07-25 14:54:26.293837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.045 [2024-07-25 14:54:26.294410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.045 [2024-07-25 14:54:26.294427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.045 [2024-07-25 14:54:26.294434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.045 [2024-07-25 14:54:26.294606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.045 [2024-07-25 14:54:26.294778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.045 [2024-07-25 14:54:26.294786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.045 [2024-07-25 14:54:26.294792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.045 [2024-07-25 14:54:26.297482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.045 [2024-07-25 14:54:26.306823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.045 [2024-07-25 14:54:26.307384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.045 [2024-07-25 14:54:26.307402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.045 [2024-07-25 14:54:26.307409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.045 [2024-07-25 14:54:26.307581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.045 [2024-07-25 14:54:26.307753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.045 [2024-07-25 14:54:26.307761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.045 [2024-07-25 14:54:26.307767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.045 [2024-07-25 14:54:26.310460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.045 [2024-07-25 14:54:26.319792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.045 [2024-07-25 14:54:26.320418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.045 [2024-07-25 14:54:26.320473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.045 [2024-07-25 14:54:26.320496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.045 [2024-07-25 14:54:26.320931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.045 [2024-07-25 14:54:26.321107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.045 [2024-07-25 14:54:26.321116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.045 [2024-07-25 14:54:26.321122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.045 [2024-07-25 14:54:26.323874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.045 [2024-07-25 14:54:26.332748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.045 [2024-07-25 14:54:26.333333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.045 [2024-07-25 14:54:26.333378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.045 [2024-07-25 14:54:26.333400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.045 [2024-07-25 14:54:26.333814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.045 [2024-07-25 14:54:26.334076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.045 [2024-07-25 14:54:26.334087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.045 [2024-07-25 14:54:26.334096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.305 [2024-07-25 14:54:26.338170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.305 [2024-07-25 14:54:26.346148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.305 [2024-07-25 14:54:26.346648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.305 [2024-07-25 14:54:26.346690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.305 [2024-07-25 14:54:26.346711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.305 [2024-07-25 14:54:26.347301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.305 [2024-07-25 14:54:26.347744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.305 [2024-07-25 14:54:26.347753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.305 [2024-07-25 14:54:26.347759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.305 [2024-07-25 14:54:26.350517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.305 [2024-07-25 14:54:26.359245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.305 [2024-07-25 14:54:26.359878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.305 [2024-07-25 14:54:26.359920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.305 [2024-07-25 14:54:26.359941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.305 [2024-07-25 14:54:26.360535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.305 [2024-07-25 14:54:26.360942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.305 [2024-07-25 14:54:26.360950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.305 [2024-07-25 14:54:26.360960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.305 [2024-07-25 14:54:26.363802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.305 [2024-07-25 14:54:26.372352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.305 [2024-07-25 14:54:26.373017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.305 [2024-07-25 14:54:26.373069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.305 [2024-07-25 14:54:26.373091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.305 [2024-07-25 14:54:26.373676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.305 [2024-07-25 14:54:26.373854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.305 [2024-07-25 14:54:26.373862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.305 [2024-07-25 14:54:26.373868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.305 [2024-07-25 14:54:26.376707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.305 [2024-07-25 14:54:26.385350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.305 [2024-07-25 14:54:26.386055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.305 [2024-07-25 14:54:26.386098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.305 [2024-07-25 14:54:26.386120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.305 [2024-07-25 14:54:26.386582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.306 [2024-07-25 14:54:26.386755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.306 [2024-07-25 14:54:26.386763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.306 [2024-07-25 14:54:26.386768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.306 [2024-07-25 14:54:26.389493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.306 [2024-07-25 14:54:26.398363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.306 [2024-07-25 14:54:26.398931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.306 [2024-07-25 14:54:26.398972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.306 [2024-07-25 14:54:26.398993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.306 [2024-07-25 14:54:26.399588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.306 [2024-07-25 14:54:26.399969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.306 [2024-07-25 14:54:26.399978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.306 [2024-07-25 14:54:26.399984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.306 [2024-07-25 14:54:26.402721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.306 [2024-07-25 14:54:26.411317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.306 [2024-07-25 14:54:26.412008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.306 [2024-07-25 14:54:26.412060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.306 [2024-07-25 14:54:26.412083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.306 [2024-07-25 14:54:26.412662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.306 [2024-07-25 14:54:26.412995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.306 [2024-07-25 14:54:26.413003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.306 [2024-07-25 14:54:26.413008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.306 [2024-07-25 14:54:26.415696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.306 [2024-07-25 14:54:26.424309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.306 [2024-07-25 14:54:26.425071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.306 [2024-07-25 14:54:26.425115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.306 [2024-07-25 14:54:26.425136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.306 [2024-07-25 14:54:26.425717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.306 [2024-07-25 14:54:26.426024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.306 [2024-07-25 14:54:26.426032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.306 [2024-07-25 14:54:26.426038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.306 [2024-07-25 14:54:26.428729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.306 [2024-07-25 14:54:26.437394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.306 [2024-07-25 14:54:26.438069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.306 [2024-07-25 14:54:26.438112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.306 [2024-07-25 14:54:26.438133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.306 [2024-07-25 14:54:26.438487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.306 [2024-07-25 14:54:26.438660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.306 [2024-07-25 14:54:26.438668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.306 [2024-07-25 14:54:26.438674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.306 [2024-07-25 14:54:26.441452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.306 [2024-07-25 14:54:26.450410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.306 [2024-07-25 14:54:26.451033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.306 [2024-07-25 14:54:26.451090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.306 [2024-07-25 14:54:26.451113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.306 [2024-07-25 14:54:26.451699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.306 [2024-07-25 14:54:26.452089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.306 [2024-07-25 14:54:26.452097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.306 [2024-07-25 14:54:26.452103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.306 [2024-07-25 14:54:26.454846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.306 [2024-07-25 14:54:26.463411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.306 [2024-07-25 14:54:26.464059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.306 [2024-07-25 14:54:26.464103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.306 [2024-07-25 14:54:26.464124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.306 [2024-07-25 14:54:26.464701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.306 [2024-07-25 14:54:26.464891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.306 [2024-07-25 14:54:26.464899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.306 [2024-07-25 14:54:26.464905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.306 [2024-07-25 14:54:26.468722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.306 [2024-07-25 14:54:26.477081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.306 [2024-07-25 14:54:26.477777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.306 [2024-07-25 14:54:26.477819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.306 [2024-07-25 14:54:26.477840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.306 [2024-07-25 14:54:26.478135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.306 [2024-07-25 14:54:26.478309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.306 [2024-07-25 14:54:26.478319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.306 [2024-07-25 14:54:26.478325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.306 [2024-07-25 14:54:26.481162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.306 [2024-07-25 14:54:26.490217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.306 [2024-07-25 14:54:26.490572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.306 [2024-07-25 14:54:26.490589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.306 [2024-07-25 14:54:26.490595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.306 [2024-07-25 14:54:26.490773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.306 [2024-07-25 14:54:26.490951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.306 [2024-07-25 14:54:26.490959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.306 [2024-07-25 14:54:26.490969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.306 [2024-07-25 14:54:26.493808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.306 [2024-07-25 14:54:26.503355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.306 [2024-07-25 14:54:26.503985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.306 [2024-07-25 14:54:26.504028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.306 [2024-07-25 14:54:26.504063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.306 [2024-07-25 14:54:26.504643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.306 [2024-07-25 14:54:26.505143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.306 [2024-07-25 14:54:26.505152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.306 [2024-07-25 14:54:26.505158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.306 [2024-07-25 14:54:26.507992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.306 [2024-07-25 14:54:26.516540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.306 [2024-07-25 14:54:26.517117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.306 [2024-07-25 14:54:26.517133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.306 [2024-07-25 14:54:26.517140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.306 [2024-07-25 14:54:26.517317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.306 [2024-07-25 14:54:26.517494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.306 [2024-07-25 14:54:26.517502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.307 [2024-07-25 14:54:26.517508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.307 [2024-07-25 14:54:26.520350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.307 [2024-07-25 14:54:26.529490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.307 [2024-07-25 14:54:26.530084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.307 [2024-07-25 14:54:26.530101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.307 [2024-07-25 14:54:26.530107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.307 [2024-07-25 14:54:26.530286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.307 [2024-07-25 14:54:26.530450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.307 [2024-07-25 14:54:26.530457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.307 [2024-07-25 14:54:26.530463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.307 [2024-07-25 14:54:26.533163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.307 [2024-07-25 14:54:26.542464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.307 [2024-07-25 14:54:26.543160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.307 [2024-07-25 14:54:26.543211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.307 [2024-07-25 14:54:26.543233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.307 [2024-07-25 14:54:26.543813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.307 [2024-07-25 14:54:26.544138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.307 [2024-07-25 14:54:26.544147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.307 [2024-07-25 14:54:26.544153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.307 [2024-07-25 14:54:26.546897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.307 [2024-07-25 14:54:26.555303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.307 [2024-07-25 14:54:26.556025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.307 [2024-07-25 14:54:26.556078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.307 [2024-07-25 14:54:26.556100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.307 [2024-07-25 14:54:26.556358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.307 [2024-07-25 14:54:26.556576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.307 [2024-07-25 14:54:26.556587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.307 [2024-07-25 14:54:26.556595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.307 [2024-07-25 14:54:26.560666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.307 [2024-07-25 14:54:26.568900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.307 [2024-07-25 14:54:26.569543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.307 [2024-07-25 14:54:26.569585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.307 [2024-07-25 14:54:26.569607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.307 [2024-07-25 14:54:26.570201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.307 [2024-07-25 14:54:26.570545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.307 [2024-07-25 14:54:26.570553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.307 [2024-07-25 14:54:26.570559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.307 [2024-07-25 14:54:26.573309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.307 [2024-07-25 14:54:26.581949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.307 [2024-07-25 14:54:26.582596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.307 [2024-07-25 14:54:26.582639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.307 [2024-07-25 14:54:26.582660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.307 [2024-07-25 14:54:26.583071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.307 [2024-07-25 14:54:26.583248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.307 [2024-07-25 14:54:26.583256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.307 [2024-07-25 14:54:26.583262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.307 [2024-07-25 14:54:26.585946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.307 [2024-07-25 14:54:26.595017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.307 [2024-07-25 14:54:26.595611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.307 [2024-07-25 14:54:26.595627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.307 [2024-07-25 14:54:26.595634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.307 [2024-07-25 14:54:26.595806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.307 [2024-07-25 14:54:26.595980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.307 [2024-07-25 14:54:26.595988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.307 [2024-07-25 14:54:26.595994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.568 [2024-07-25 14:54:26.598798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.568 [2024-07-25 14:54:26.607987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.568 [2024-07-25 14:54:26.608635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.568 [2024-07-25 14:54:26.608676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.568 [2024-07-25 14:54:26.608697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.568 [2024-07-25 14:54:26.609183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.568 [2024-07-25 14:54:26.609356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.568 [2024-07-25 14:54:26.609364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.568 [2024-07-25 14:54:26.609370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.568 [2024-07-25 14:54:26.612216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.568 [2024-07-25 14:54:26.621041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.568 [2024-07-25 14:54:26.621697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.568 [2024-07-25 14:54:26.621738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.568 [2024-07-25 14:54:26.621759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.568 [2024-07-25 14:54:26.622351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.568 [2024-07-25 14:54:26.622635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.568 [2024-07-25 14:54:26.622643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.568 [2024-07-25 14:54:26.622649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.568 [2024-07-25 14:54:26.625415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.568 [2024-07-25 14:54:26.634026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.568 [2024-07-25 14:54:26.634694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.568 [2024-07-25 14:54:26.634736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.568 [2024-07-25 14:54:26.634758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.568 [2024-07-25 14:54:26.635176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.568 [2024-07-25 14:54:26.635348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.568 [2024-07-25 14:54:26.635356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.568 [2024-07-25 14:54:26.635362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.568 [2024-07-25 14:54:26.638164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.568 [2024-07-25 14:54:26.646923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.568 [2024-07-25 14:54:26.647570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.568 [2024-07-25 14:54:26.647612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.568 [2024-07-25 14:54:26.647633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.568 [2024-07-25 14:54:26.648223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.568 [2024-07-25 14:54:26.648410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.568 [2024-07-25 14:54:26.648418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.568 [2024-07-25 14:54:26.648424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.568 [2024-07-25 14:54:26.651165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.568 [2024-07-25 14:54:26.659831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.568 [2024-07-25 14:54:26.660536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.568 [2024-07-25 14:54:26.660552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.568 [2024-07-25 14:54:26.660559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.568 [2024-07-25 14:54:26.660731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.568 [2024-07-25 14:54:26.660902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.568 [2024-07-25 14:54:26.660910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.568 [2024-07-25 14:54:26.660916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.568 [2024-07-25 14:54:26.663606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.568 [2024-07-25 14:54:26.672662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.568 [2024-07-25 14:54:26.673369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.568 [2024-07-25 14:54:26.673412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.568 [2024-07-25 14:54:26.673440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.568 [2024-07-25 14:54:26.673760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.568 [2024-07-25 14:54:26.673933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.568 [2024-07-25 14:54:26.673941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.568 [2024-07-25 14:54:26.673947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.568 [2024-07-25 14:54:26.676719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.568 [2024-07-25 14:54:26.685482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.568 [2024-07-25 14:54:26.686212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.568 [2024-07-25 14:54:26.686254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.568 [2024-07-25 14:54:26.686276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.568 [2024-07-25 14:54:26.686627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.568 [2024-07-25 14:54:26.686790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.568 [2024-07-25 14:54:26.686797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.568 [2024-07-25 14:54:26.686803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.569 [2024-07-25 14:54:26.689500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.569 [2024-07-25 14:54:26.698276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.569 [2024-07-25 14:54:26.699004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-07-25 14:54:26.699072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.569 [2024-07-25 14:54:26.699094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.569 [2024-07-25 14:54:26.699618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.569 [2024-07-25 14:54:26.699791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.569 [2024-07-25 14:54:26.699799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.569 [2024-07-25 14:54:26.699804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.569 [2024-07-25 14:54:26.702501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.569 [2024-07-25 14:54:26.711131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.569 [2024-07-25 14:54:26.711871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-07-25 14:54:26.711913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.569 [2024-07-25 14:54:26.711934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.569 [2024-07-25 14:54:26.712359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.569 [2024-07-25 14:54:26.712532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.569 [2024-07-25 14:54:26.712543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.569 [2024-07-25 14:54:26.712549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.569 [2024-07-25 14:54:26.715236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.569 [2024-07-25 14:54:26.723962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.569 [2024-07-25 14:54:26.724702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-07-25 14:54:26.724743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.569 [2024-07-25 14:54:26.724765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.569 [2024-07-25 14:54:26.725358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.569 [2024-07-25 14:54:26.725944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.569 [2024-07-25 14:54:26.725967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.569 [2024-07-25 14:54:26.725996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.569 [2024-07-25 14:54:26.728682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.569 [2024-07-25 14:54:26.736823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.569 [2024-07-25 14:54:26.737545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-07-25 14:54:26.737588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.569 [2024-07-25 14:54:26.737610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.569 [2024-07-25 14:54:26.738205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.569 [2024-07-25 14:54:26.738459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.569 [2024-07-25 14:54:26.738469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.569 [2024-07-25 14:54:26.738478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.569 [2024-07-25 14:54:26.742546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.569 [2024-07-25 14:54:26.750266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.569 [2024-07-25 14:54:26.751014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-07-25 14:54:26.751068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.569 [2024-07-25 14:54:26.751090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.569 [2024-07-25 14:54:26.751497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.569 [2024-07-25 14:54:26.751670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.569 [2024-07-25 14:54:26.751678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.569 [2024-07-25 14:54:26.751684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.569 [2024-07-25 14:54:26.754417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.569 [2024-07-25 14:54:26.763191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.569 [2024-07-25 14:54:26.763920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-07-25 14:54:26.763964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.569 [2024-07-25 14:54:26.763985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.569 [2024-07-25 14:54:26.764454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.569 [2024-07-25 14:54:26.764647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.569 [2024-07-25 14:54:26.764655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.569 [2024-07-25 14:54:26.764661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.569 [2024-07-25 14:54:26.767457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.569 [2024-07-25 14:54:26.776129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.569 [2024-07-25 14:54:26.776842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-07-25 14:54:26.776885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.569 [2024-07-25 14:54:26.776905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.569 [2024-07-25 14:54:26.777500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.569 [2024-07-25 14:54:26.777832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.569 [2024-07-25 14:54:26.777840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.569 [2024-07-25 14:54:26.777846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.569 [2024-07-25 14:54:26.780529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.569 [2024-07-25 14:54:26.788927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.569 [2024-07-25 14:54:26.789637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-07-25 14:54:26.789653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.569 [2024-07-25 14:54:26.789660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.569 [2024-07-25 14:54:26.789832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.569 [2024-07-25 14:54:26.790005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.569 [2024-07-25 14:54:26.790013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.569 [2024-07-25 14:54:26.790019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.569 [2024-07-25 14:54:26.792707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.569 [2024-07-25 14:54:26.801868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.569 [2024-07-25 14:54:26.802505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-07-25 14:54:26.802547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.569 [2024-07-25 14:54:26.802568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.569 [2024-07-25 14:54:26.802988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.569 [2024-07-25 14:54:26.803165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.569 [2024-07-25 14:54:26.803173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.569 [2024-07-25 14:54:26.803179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.569 [2024-07-25 14:54:26.805861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.569 [2024-07-25 14:54:26.814759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.569 [2024-07-25 14:54:26.815490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.569 [2024-07-25 14:54:26.815533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.569 [2024-07-25 14:54:26.815554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.569 [2024-07-25 14:54:26.815906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.569 [2024-07-25 14:54:26.816083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.569 [2024-07-25 14:54:26.816091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.569 [2024-07-25 14:54:26.816097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.569 [2024-07-25 14:54:26.818779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.570 [2024-07-25 14:54:26.827684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.570 [2024-07-25 14:54:26.828421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-07-25 14:54:26.828463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.570 [2024-07-25 14:54:26.828484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.570 [2024-07-25 14:54:26.828814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.570 [2024-07-25 14:54:26.828987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.570 [2024-07-25 14:54:26.828995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.570 [2024-07-25 14:54:26.829001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.570 [2024-07-25 14:54:26.831691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.570 [2024-07-25 14:54:26.840474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.570 [2024-07-25 14:54:26.841193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-07-25 14:54:26.841236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.570 [2024-07-25 14:54:26.841257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.570 [2024-07-25 14:54:26.841837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.570 [2024-07-25 14:54:26.842015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.570 [2024-07-25 14:54:26.842023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.570 [2024-07-25 14:54:26.842032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.570 [2024-07-25 14:54:26.844732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.570 [2024-07-25 14:54:26.853419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.570 [2024-07-25 14:54:26.854130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.570 [2024-07-25 14:54:26.854173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.570 [2024-07-25 14:54:26.854193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.570 [2024-07-25 14:54:26.854772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.570 [2024-07-25 14:54:26.854993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.570 [2024-07-25 14:54:26.855001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.570 [2024-07-25 14:54:26.855007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.570 [2024-07-25 14:54:26.857825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.829 [2024-07-25 14:54:26.866399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.830 [2024-07-25 14:54:26.867120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.830 [2024-07-25 14:54:26.867163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.830 [2024-07-25 14:54:26.867184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.830 [2024-07-25 14:54:26.867457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.830 [2024-07-25 14:54:26.867635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.830 [2024-07-25 14:54:26.867643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.830 [2024-07-25 14:54:26.867649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.830 [2024-07-25 14:54:26.870487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.830 [2024-07-25 14:54:26.879477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.830 [2024-07-25 14:54:26.880205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.830 [2024-07-25 14:54:26.880248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.830 [2024-07-25 14:54:26.880269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.830 [2024-07-25 14:54:26.880849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.830 [2024-07-25 14:54:26.881171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.830 [2024-07-25 14:54:26.881179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.830 [2024-07-25 14:54:26.881185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.830 [2024-07-25 14:54:26.883974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.830 [2024-07-25 14:54:26.892466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.830 [2024-07-25 14:54:26.893190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.830 [2024-07-25 14:54:26.893232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.830 [2024-07-25 14:54:26.893253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.830 [2024-07-25 14:54:26.893831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.830 [2024-07-25 14:54:26.894150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.830 [2024-07-25 14:54:26.894159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.830 [2024-07-25 14:54:26.894165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.830 [2024-07-25 14:54:26.896845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.830 [2024-07-25 14:54:26.905287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.830 [2024-07-25 14:54:26.906019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.830 [2024-07-25 14:54:26.906074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.830 [2024-07-25 14:54:26.906097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.830 [2024-07-25 14:54:26.906677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.830 [2024-07-25 14:54:26.907208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.830 [2024-07-25 14:54:26.907216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.830 [2024-07-25 14:54:26.907222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.830 [2024-07-25 14:54:26.909901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.830 [2024-07-25 14:54:26.918179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.830 [2024-07-25 14:54:26.918624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.830 [2024-07-25 14:54:26.918665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.830 [2024-07-25 14:54:26.918687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.830 [2024-07-25 14:54:26.919132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.830 [2024-07-25 14:54:26.919305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.830 [2024-07-25 14:54:26.919313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.830 [2024-07-25 14:54:26.919319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.830 [2024-07-25 14:54:26.922000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.830 [2024-07-25 14:54:26.931052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.830 [2024-07-25 14:54:26.931765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.830 [2024-07-25 14:54:26.931808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.830 [2024-07-25 14:54:26.931829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.830 [2024-07-25 14:54:26.932224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.830 [2024-07-25 14:54:26.932400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.830 [2024-07-25 14:54:26.932408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.830 [2024-07-25 14:54:26.932414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.830 [2024-07-25 14:54:26.935105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.830 [2024-07-25 14:54:26.943994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.830 [2024-07-25 14:54:26.944737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.830 [2024-07-25 14:54:26.944779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.830 [2024-07-25 14:54:26.944800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.830 [2024-07-25 14:54:26.945249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.830 [2024-07-25 14:54:26.945422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.830 [2024-07-25 14:54:26.945430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.830 [2024-07-25 14:54:26.945436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.830 [2024-07-25 14:54:26.948172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.830 [2024-07-25 14:54:26.956914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.830 [2024-07-25 14:54:26.957561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.830 [2024-07-25 14:54:26.957577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.830 [2024-07-25 14:54:26.957584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.830 [2024-07-25 14:54:26.957756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.830 [2024-07-25 14:54:26.957928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.830 [2024-07-25 14:54:26.957936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.830 [2024-07-25 14:54:26.957942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.830 [2024-07-25 14:54:26.960633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.830 [2024-07-25 14:54:26.969823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.830 [2024-07-25 14:54:26.970534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.830 [2024-07-25 14:54:26.970577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.830 [2024-07-25 14:54:26.970598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.830 [2024-07-25 14:54:26.971053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.830 [2024-07-25 14:54:26.971271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.830 [2024-07-25 14:54:26.971282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.830 [2024-07-25 14:54:26.971291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.830 [2024-07-25 14:54:26.975361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.830 [2024-07-25 14:54:26.983197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.830 [2024-07-25 14:54:26.983954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.830 [2024-07-25 14:54:26.983997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.830 [2024-07-25 14:54:26.984018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.830 [2024-07-25 14:54:26.984405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.830 [2024-07-25 14:54:26.984578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.830 [2024-07-25 14:54:26.984586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.830 [2024-07-25 14:54:26.984592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.831 [2024-07-25 14:54:26.987318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.831 [2024-07-25 14:54:26.996102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.831 [2024-07-25 14:54:26.996841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.831 [2024-07-25 14:54:26.996883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.831 [2024-07-25 14:54:26.996904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.831 [2024-07-25 14:54:26.997251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.831 [2024-07-25 14:54:26.997429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.831 [2024-07-25 14:54:26.997438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.831 [2024-07-25 14:54:26.997444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.831 [2024-07-25 14:54:27.000182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.831 [2024-07-25 14:54:27.008977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.831 [2024-07-25 14:54:27.009668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.831 [2024-07-25 14:54:27.009710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.831 [2024-07-25 14:54:27.009731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.831 [2024-07-25 14:54:27.010174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.831 [2024-07-25 14:54:27.010358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.831 [2024-07-25 14:54:27.010366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.831 [2024-07-25 14:54:27.010372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.831 [2024-07-25 14:54:27.013056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.831 [2024-07-25 14:54:27.021987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.831 [2024-07-25 14:54:27.022701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.831 [2024-07-25 14:54:27.022744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.831 [2024-07-25 14:54:27.022774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.831 [2024-07-25 14:54:27.023257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.831 [2024-07-25 14:54:27.023430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.831 [2024-07-25 14:54:27.023438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.831 [2024-07-25 14:54:27.023444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.831 [2024-07-25 14:54:27.026162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.831 [2024-07-25 14:54:27.034878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.831 [2024-07-25 14:54:27.035524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.831 [2024-07-25 14:54:27.035567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.831 [2024-07-25 14:54:27.035588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.831 [2024-07-25 14:54:27.036179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.831 [2024-07-25 14:54:27.036569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.831 [2024-07-25 14:54:27.036577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.831 [2024-07-25 14:54:27.036584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.831 [2024-07-25 14:54:27.039272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.831 [2024-07-25 14:54:27.047808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.831 [2024-07-25 14:54:27.048559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.831 [2024-07-25 14:54:27.048601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.831 [2024-07-25 14:54:27.048623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.831 [2024-07-25 14:54:27.049214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.831 [2024-07-25 14:54:27.049592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.831 [2024-07-25 14:54:27.049600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.831 [2024-07-25 14:54:27.049606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.831 [2024-07-25 14:54:27.052321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.831 [2024-07-25 14:54:27.060681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.831 [2024-07-25 14:54:27.061354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.831 [2024-07-25 14:54:27.061396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.831 [2024-07-25 14:54:27.061417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.831 [2024-07-25 14:54:27.061997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.831 [2024-07-25 14:54:27.062331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.831 [2024-07-25 14:54:27.062340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.831 [2024-07-25 14:54:27.062346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.831 [2024-07-25 14:54:27.066416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.831 [2024-07-25 14:54:27.074194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.831 [2024-07-25 14:54:27.074893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.831 [2024-07-25 14:54:27.074935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.831 [2024-07-25 14:54:27.074957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.831 [2024-07-25 14:54:27.075396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.831 [2024-07-25 14:54:27.075570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.831 [2024-07-25 14:54:27.075578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.831 [2024-07-25 14:54:27.075583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.831 [2024-07-25 14:54:27.078326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.831 [2024-07-25 14:54:27.087057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.831 [2024-07-25 14:54:27.087773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.831 [2024-07-25 14:54:27.087815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.831 [2024-07-25 14:54:27.087837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.831 [2024-07-25 14:54:27.088428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.831 [2024-07-25 14:54:27.088725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.831 [2024-07-25 14:54:27.088733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.831 [2024-07-25 14:54:27.088739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.831 [2024-07-25 14:54:27.091461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.831 [2024-07-25 14:54:27.099993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.831 [2024-07-25 14:54:27.100727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.831 [2024-07-25 14:54:27.100770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.831 [2024-07-25 14:54:27.100792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.831 [2024-07-25 14:54:27.101382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.831 [2024-07-25 14:54:27.101717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.831 [2024-07-25 14:54:27.101725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.831 [2024-07-25 14:54:27.101731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.831 [2024-07-25 14:54:27.104450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.831 [2024-07-25 14:54:27.112934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.831 [2024-07-25 14:54:27.113667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.831 [2024-07-25 14:54:27.113710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:06.831 [2024-07-25 14:54:27.113731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:06.832 [2024-07-25 14:54:27.114325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:06.832 [2024-07-25 14:54:27.114907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.832 [2024-07-25 14:54:27.114931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.832 [2024-07-25 14:54:27.114951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.832 [2024-07-25 14:54:27.117756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.092 [2024-07-25 14:54:27.126156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.092 [2024-07-25 14:54:27.126873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.092 [2024-07-25 14:54:27.126889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.092 [2024-07-25 14:54:27.126896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.092 [2024-07-25 14:54:27.127077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.092 [2024-07-25 14:54:27.127255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.092 [2024-07-25 14:54:27.127263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.092 [2024-07-25 14:54:27.127270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.092 [2024-07-25 14:54:27.130072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.092 [2024-07-25 14:54:27.139214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.092 [2024-07-25 14:54:27.139906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.092 [2024-07-25 14:54:27.139948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.092 [2024-07-25 14:54:27.139969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.092 [2024-07-25 14:54:27.140559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.092 [2024-07-25 14:54:27.140877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.092 [2024-07-25 14:54:27.140885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.092 [2024-07-25 14:54:27.140891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.092 [2024-07-25 14:54:27.143691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.092 [2024-07-25 14:54:27.152244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.092 [2024-07-25 14:54:27.152972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.092 [2024-07-25 14:54:27.153014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.092 [2024-07-25 14:54:27.153057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.092 [2024-07-25 14:54:27.153508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.092 [2024-07-25 14:54:27.153691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.092 [2024-07-25 14:54:27.153699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.092 [2024-07-25 14:54:27.153704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.092 [2024-07-25 14:54:27.157587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.092 [2024-07-25 14:54:27.165782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.092 [2024-07-25 14:54:27.166478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.092 [2024-07-25 14:54:27.166523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.092 [2024-07-25 14:54:27.166545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.092 [2024-07-25 14:54:27.166839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.092 [2024-07-25 14:54:27.167011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.092 [2024-07-25 14:54:27.167019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.092 [2024-07-25 14:54:27.167025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.092 [2024-07-25 14:54:27.169750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.092 [2024-07-25 14:54:27.178707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.092 [2024-07-25 14:54:27.179428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.092 [2024-07-25 14:54:27.179471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.092 [2024-07-25 14:54:27.179492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.092 [2024-07-25 14:54:27.179818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.092 [2024-07-25 14:54:27.179991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.092 [2024-07-25 14:54:27.179998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.092 [2024-07-25 14:54:27.180004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.092 [2024-07-25 14:54:27.182691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.092 [2024-07-25 14:54:27.191617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.092 [2024-07-25 14:54:27.192141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.092 [2024-07-25 14:54:27.192158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.092 [2024-07-25 14:54:27.192164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.092 [2024-07-25 14:54:27.192336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.092 [2024-07-25 14:54:27.192512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.092 [2024-07-25 14:54:27.192522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.092 [2024-07-25 14:54:27.192528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.092 [2024-07-25 14:54:27.195237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.092 [2024-07-25 14:54:27.204414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.092 [2024-07-25 14:54:27.205105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.092 [2024-07-25 14:54:27.205148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.092 [2024-07-25 14:54:27.205169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.092 [2024-07-25 14:54:27.205637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.092 [2024-07-25 14:54:27.205800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.092 [2024-07-25 14:54:27.205807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.092 [2024-07-25 14:54:27.205813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.092 [2024-07-25 14:54:27.208509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.092 [2024-07-25 14:54:27.217286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.093 [2024-07-25 14:54:27.218003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.093 [2024-07-25 14:54:27.218059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.093 [2024-07-25 14:54:27.218081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.093 [2024-07-25 14:54:27.218658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.093 [2024-07-25 14:54:27.219164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.093 [2024-07-25 14:54:27.219172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.093 [2024-07-25 14:54:27.219178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.093 [2024-07-25 14:54:27.221859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.093 [2024-07-25 14:54:27.230147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.093 [2024-07-25 14:54:27.230855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.093 [2024-07-25 14:54:27.230897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.093 [2024-07-25 14:54:27.230918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.093 [2024-07-25 14:54:27.231511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.093 [2024-07-25 14:54:27.231820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.093 [2024-07-25 14:54:27.231828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.093 [2024-07-25 14:54:27.231834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.093 [2024-07-25 14:54:27.234516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.093 [2024-07-25 14:54:27.243155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.093 [2024-07-25 14:54:27.243867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.093 [2024-07-25 14:54:27.243909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.093 [2024-07-25 14:54:27.243930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.093 [2024-07-25 14:54:27.244524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.093 [2024-07-25 14:54:27.244983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.093 [2024-07-25 14:54:27.244991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.093 [2024-07-25 14:54:27.244996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.093 [2024-07-25 14:54:27.247681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.093 [2024-07-25 14:54:27.255971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.093 [2024-07-25 14:54:27.256668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.093 [2024-07-25 14:54:27.256710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.093 [2024-07-25 14:54:27.256731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.093 [2024-07-25 14:54:27.257324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.093 [2024-07-25 14:54:27.257907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.093 [2024-07-25 14:54:27.257931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.093 [2024-07-25 14:54:27.257937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.093 [2024-07-25 14:54:27.260674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.093 [2024-07-25 14:54:27.268859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.093 [2024-07-25 14:54:27.269575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.093 [2024-07-25 14:54:27.269618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.093 [2024-07-25 14:54:27.269640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.093 [2024-07-25 14:54:27.269990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.093 [2024-07-25 14:54:27.270168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.093 [2024-07-25 14:54:27.270176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.093 [2024-07-25 14:54:27.270182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.093 [2024-07-25 14:54:27.272866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.093 [2024-07-25 14:54:27.281764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.093 [2024-07-25 14:54:27.282449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.093 [2024-07-25 14:54:27.282491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.093 [2024-07-25 14:54:27.282513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.093 [2024-07-25 14:54:27.282882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.093 [2024-07-25 14:54:27.283059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.093 [2024-07-25 14:54:27.283067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.093 [2024-07-25 14:54:27.283073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.093 [2024-07-25 14:54:27.285811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.093 [2024-07-25 14:54:27.294554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.093 [2024-07-25 14:54:27.295270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.093 [2024-07-25 14:54:27.295315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.093 [2024-07-25 14:54:27.295337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.093 [2024-07-25 14:54:27.295917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.093 [2024-07-25 14:54:27.296209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.093 [2024-07-25 14:54:27.296218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.093 [2024-07-25 14:54:27.296224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.093 [2024-07-25 14:54:27.298907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.093 [2024-07-25 14:54:27.307353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.093 [2024-07-25 14:54:27.308071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.093 [2024-07-25 14:54:27.308116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.093 [2024-07-25 14:54:27.308138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.093 [2024-07-25 14:54:27.308522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.093 [2024-07-25 14:54:27.308685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.093 [2024-07-25 14:54:27.308693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.093 [2024-07-25 14:54:27.308699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.093 [2024-07-25 14:54:27.311396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.093 [2024-07-25 14:54:27.320179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.093 [2024-07-25 14:54:27.320894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.093 [2024-07-25 14:54:27.320937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.093 [2024-07-25 14:54:27.320958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.093 [2024-07-25 14:54:27.321552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.093 [2024-07-25 14:54:27.322102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.093 [2024-07-25 14:54:27.322110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.093 [2024-07-25 14:54:27.322120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.093 [2024-07-25 14:54:27.324798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.093 [2024-07-25 14:54:27.333016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.093 [2024-07-25 14:54:27.333707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.093 [2024-07-25 14:54:27.333750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.094 [2024-07-25 14:54:27.333771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.094 [2024-07-25 14:54:27.334170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.094 [2024-07-25 14:54:27.334344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.094 [2024-07-25 14:54:27.334352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.094 [2024-07-25 14:54:27.334358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.094 [2024-07-25 14:54:27.337040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.094 [2024-07-25 14:54:27.345921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.094 [2024-07-25 14:54:27.346626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.094 [2024-07-25 14:54:27.346668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.094 [2024-07-25 14:54:27.346690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.094 [2024-07-25 14:54:27.347240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.094 [2024-07-25 14:54:27.347412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.094 [2024-07-25 14:54:27.347420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.094 [2024-07-25 14:54:27.347426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.094 [2024-07-25 14:54:27.350167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.094 [2024-07-25 14:54:27.358736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.094 [2024-07-25 14:54:27.359445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.094 [2024-07-25 14:54:27.359488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.094 [2024-07-25 14:54:27.359509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.094 [2024-07-25 14:54:27.359984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.094 [2024-07-25 14:54:27.360161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.094 [2024-07-25 14:54:27.360175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.094 [2024-07-25 14:54:27.360181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.094 [2024-07-25 14:54:27.362862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.094 [2024-07-25 14:54:27.371565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.094 [2024-07-25 14:54:27.372282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.094 [2024-07-25 14:54:27.372302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.094 [2024-07-25 14:54:27.372309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.094 [2024-07-25 14:54:27.372486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.094 [2024-07-25 14:54:27.372664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.094 [2024-07-25 14:54:27.372672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.094 [2024-07-25 14:54:27.372678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.094 [2024-07-25 14:54:27.375514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.354 [2024-07-25 14:54:27.384732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.354 [2024-07-25 14:54:27.385367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-07-25 14:54:27.385413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.354 [2024-07-25 14:54:27.385434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.354 [2024-07-25 14:54:27.386015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.354 [2024-07-25 14:54:27.386223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.354 [2024-07-25 14:54:27.386232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.354 [2024-07-25 14:54:27.386239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.354 [2024-07-25 14:54:27.389037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.354 [2024-07-25 14:54:27.397758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.354 [2024-07-25 14:54:27.398376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-07-25 14:54:27.398420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.354 [2024-07-25 14:54:27.398441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.354 [2024-07-25 14:54:27.399016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.354 [2024-07-25 14:54:27.399194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.354 [2024-07-25 14:54:27.399203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.354 [2024-07-25 14:54:27.399209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.354 [2024-07-25 14:54:27.401891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.354 [2024-07-25 14:54:27.410624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.354 [2024-07-25 14:54:27.411334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-07-25 14:54:27.411378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.354 [2024-07-25 14:54:27.411399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.354 [2024-07-25 14:54:27.411979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.354 [2024-07-25 14:54:27.412368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.354 [2024-07-25 14:54:27.412376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.354 [2024-07-25 14:54:27.412382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.354 [2024-07-25 14:54:27.415067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.354 [2024-07-25 14:54:27.423487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.354 [2024-07-25 14:54:27.424195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-07-25 14:54:27.424238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.354 [2024-07-25 14:54:27.424260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.354 [2024-07-25 14:54:27.424766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.354 [2024-07-25 14:54:27.424938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.354 [2024-07-25 14:54:27.424946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.354 [2024-07-25 14:54:27.424952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.354 [2024-07-25 14:54:27.427643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.354 [2024-07-25 14:54:27.436359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.354 [2024-07-25 14:54:27.437057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-07-25 14:54:27.437100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.354 [2024-07-25 14:54:27.437121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.354 [2024-07-25 14:54:27.437492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.354 [2024-07-25 14:54:27.437665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.354 [2024-07-25 14:54:27.437673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.354 [2024-07-25 14:54:27.437679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.354 [2024-07-25 14:54:27.440378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.354 [2024-07-25 14:54:27.449226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.354 [2024-07-25 14:54:27.449928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-07-25 14:54:27.449971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.354 [2024-07-25 14:54:27.449992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.354 [2024-07-25 14:54:27.450542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.354 [2024-07-25 14:54:27.450715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.354 [2024-07-25 14:54:27.450723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.354 [2024-07-25 14:54:27.450729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.354 [2024-07-25 14:54:27.453417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.354 [2024-07-25 14:54:27.462059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.354 [2024-07-25 14:54:27.462770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-07-25 14:54:27.462812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.354 [2024-07-25 14:54:27.462833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.354 [2024-07-25 14:54:27.463223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.354 [2024-07-25 14:54:27.463395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.354 [2024-07-25 14:54:27.463403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.354 [2024-07-25 14:54:27.463409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.354 [2024-07-25 14:54:27.466139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.354 [2024-07-25 14:54:27.475252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.354 [2024-07-25 14:54:27.475951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.354 [2024-07-25 14:54:27.475994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.354 [2024-07-25 14:54:27.476015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.354 [2024-07-25 14:54:27.476463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.354 [2024-07-25 14:54:27.476717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.355 [2024-07-25 14:54:27.476728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.355 [2024-07-25 14:54:27.476737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.355 [2024-07-25 14:54:27.480798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.355 [2024-07-25 14:54:27.488453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.355 [2024-07-25 14:54:27.489141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-07-25 14:54:27.489184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.355 [2024-07-25 14:54:27.489205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.355 [2024-07-25 14:54:27.489438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.355 [2024-07-25 14:54:27.489610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.355 [2024-07-25 14:54:27.489618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.355 [2024-07-25 14:54:27.489623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.355 [2024-07-25 14:54:27.492441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.355 [2024-07-25 14:54:27.501655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.355 [2024-07-25 14:54:27.502371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-07-25 14:54:27.502389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.355 [2024-07-25 14:54:27.502399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.355 [2024-07-25 14:54:27.502577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.355 [2024-07-25 14:54:27.502755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.355 [2024-07-25 14:54:27.502764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.355 [2024-07-25 14:54:27.502771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.355 [2024-07-25 14:54:27.505611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.355 [2024-07-25 14:54:27.514812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.355 [2024-07-25 14:54:27.515511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-07-25 14:54:27.515555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.355 [2024-07-25 14:54:27.515576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.355 [2024-07-25 14:54:27.516164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.355 [2024-07-25 14:54:27.516442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.355 [2024-07-25 14:54:27.516453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.355 [2024-07-25 14:54:27.516462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.355 [2024-07-25 14:54:27.520525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.355 [2024-07-25 14:54:27.528409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.355 [2024-07-25 14:54:27.529129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-07-25 14:54:27.529145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.355 [2024-07-25 14:54:27.529152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.355 [2024-07-25 14:54:27.529329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.355 [2024-07-25 14:54:27.529508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.355 [2024-07-25 14:54:27.529515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.355 [2024-07-25 14:54:27.529522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.355 [2024-07-25 14:54:27.532340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.355 [2024-07-25 14:54:27.541446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.355 [2024-07-25 14:54:27.542136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-07-25 14:54:27.542179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.355 [2024-07-25 14:54:27.542200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.355 [2024-07-25 14:54:27.542779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.355 [2024-07-25 14:54:27.543001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.355 [2024-07-25 14:54:27.543024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.355 [2024-07-25 14:54:27.543030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.355 [2024-07-25 14:54:27.545830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.355 [2024-07-25 14:54:27.554380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.355 [2024-07-25 14:54:27.555082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-07-25 14:54:27.555098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.355 [2024-07-25 14:54:27.555104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.355 [2024-07-25 14:54:27.555276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.355 [2024-07-25 14:54:27.555449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.355 [2024-07-25 14:54:27.555456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.355 [2024-07-25 14:54:27.555462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.355 [2024-07-25 14:54:27.558173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.355 [2024-07-25 14:54:27.567246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.355 [2024-07-25 14:54:27.567929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.355 [2024-07-25 14:54:27.567971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.355 [2024-07-25 14:54:27.567992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.355 [2024-07-25 14:54:27.568448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.355 [2024-07-25 14:54:27.568631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.355 [2024-07-25 14:54:27.568639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.355 [2024-07-25 14:54:27.568646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.355 [2024-07-25 14:54:27.571331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.355 [2024-07-25 14:54:27.580077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:07.355 [2024-07-25 14:54:27.580729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.355 [2024-07-25 14:54:27.580772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420
00:27:07.355 [2024-07-25 14:54:27.580793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set
00:27:07.355 [2024-07-25 14:54:27.581389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor
00:27:07.355 [2024-07-25 14:54:27.581810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:07.355 [2024-07-25 14:54:27.581818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:07.355 [2024-07-25 14:54:27.581824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:07.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2479331 Killed "${NVMF_APP[@]}" "$@"
00:27:07.355 14:54:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:07.355 14:54:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:07.355 14:54:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:07.355 [2024-07-25 14:54:27.584621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:07.355 14:54:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:07.355 14:54:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:07.355 14:54:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2480743
00:27:07.355 14:54:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2480743
00:27:07.355 14:54:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:07.355 14:54:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2480743 ']'
00:27:07.355 14:54:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:07.355 14:54:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:07.355 14:54:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:07.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:07.355 [2024-07-25 14:54:27.593182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.355 14:54:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:07.355 14:54:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.355 [2024-07-25 14:54:27.593899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.356 [2024-07-25 14:54:27.593916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.356 [2024-07-25 14:54:27.593923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.356 [2024-07-25 14:54:27.594107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.356 [2024-07-25 14:54:27.594284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.356 [2024-07-25 14:54:27.594294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.356 [2024-07-25 14:54:27.594300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.356 [2024-07-25 14:54:27.597143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.356 [2024-07-25 14:54:27.606374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.356 [2024-07-25 14:54:27.607226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.356 [2024-07-25 14:54:27.607243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.356 [2024-07-25 14:54:27.607250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.356 [2024-07-25 14:54:27.607427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.356 [2024-07-25 14:54:27.607605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.356 [2024-07-25 14:54:27.607612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.356 [2024-07-25 14:54:27.607619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.356 [2024-07-25 14:54:27.610468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.356 [2024-07-25 14:54:27.619533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.356 [2024-07-25 14:54:27.620260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.356 [2024-07-25 14:54:27.620276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.356 [2024-07-25 14:54:27.620283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.356 [2024-07-25 14:54:27.620460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.356 [2024-07-25 14:54:27.620639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.356 [2024-07-25 14:54:27.620648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.356 [2024-07-25 14:54:27.620654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.356 [2024-07-25 14:54:27.623497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.356 [2024-07-25 14:54:27.632745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.356 [2024-07-25 14:54:27.633466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.356 [2024-07-25 14:54:27.633483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.356 [2024-07-25 14:54:27.633490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.356 [2024-07-25 14:54:27.633668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.356 [2024-07-25 14:54:27.633846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.356 [2024-07-25 14:54:27.633854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.356 [2024-07-25 14:54:27.633860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.356 [2024-07-25 14:54:27.636703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.356 [2024-07-25 14:54:27.639474] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:27:07.356 [2024-07-25 14:54:27.639513] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.616 [2024-07-25 14:54:27.645941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.616 [2024-07-25 14:54:27.646697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.616 [2024-07-25 14:54:27.646713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.616 [2024-07-25 14:54:27.646721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.616 [2024-07-25 14:54:27.646899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.616 [2024-07-25 14:54:27.647082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.616 [2024-07-25 14:54:27.647091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.616 [2024-07-25 14:54:27.647097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.616 [2024-07-25 14:54:27.649937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.616 [2024-07-25 14:54:27.658960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.616 [2024-07-25 14:54:27.659627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.616 [2024-07-25 14:54:27.659643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.616 [2024-07-25 14:54:27.659650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.616 [2024-07-25 14:54:27.659827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.616 [2024-07-25 14:54:27.660006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.616 [2024-07-25 14:54:27.660014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.616 [2024-07-25 14:54:27.660021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.616 [2024-07-25 14:54:27.662795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.616 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.616 [2024-07-25 14:54:27.672066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.616 [2024-07-25 14:54:27.672761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.616 [2024-07-25 14:54:27.672778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.616 [2024-07-25 14:54:27.672785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.616 [2024-07-25 14:54:27.672963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.616 [2024-07-25 14:54:27.673148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.616 [2024-07-25 14:54:27.673156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.616 [2024-07-25 14:54:27.673163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.616 [2024-07-25 14:54:27.675960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.616 [2024-07-25 14:54:27.685099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.616 [2024-07-25 14:54:27.685772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.616 [2024-07-25 14:54:27.685789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.616 [2024-07-25 14:54:27.685796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.616 [2024-07-25 14:54:27.685974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.616 [2024-07-25 14:54:27.686158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.616 [2024-07-25 14:54:27.686167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.616 [2024-07-25 14:54:27.686173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.616 [2024-07-25 14:54:27.689018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.616 [2024-07-25 14:54:27.698138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.616 [2024-07-25 14:54:27.698380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:07.616 [2024-07-25 14:54:27.698799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.616 [2024-07-25 14:54:27.698816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.616 [2024-07-25 14:54:27.698824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.616 [2024-07-25 14:54:27.699006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.616 [2024-07-25 14:54:27.699191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.616 [2024-07-25 14:54:27.699200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.616 [2024-07-25 14:54:27.699206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.616 [2024-07-25 14:54:27.701972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.616 [2024-07-25 14:54:27.711304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.616 [2024-07-25 14:54:27.711957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.616 [2024-07-25 14:54:27.711974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.616 [2024-07-25 14:54:27.711982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.616 [2024-07-25 14:54:27.712164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.616 [2024-07-25 14:54:27.712348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.616 [2024-07-25 14:54:27.712359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.616 [2024-07-25 14:54:27.712366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.616 [2024-07-25 14:54:27.715202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.616 [2024-07-25 14:54:27.724353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.616 [2024-07-25 14:54:27.724976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.616 [2024-07-25 14:54:27.724993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.616 [2024-07-25 14:54:27.725000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.616 [2024-07-25 14:54:27.725189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.616 [2024-07-25 14:54:27.725362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.616 [2024-07-25 14:54:27.725370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.616 [2024-07-25 14:54:27.725377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.616 [2024-07-25 14:54:27.728191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.616 [2024-07-25 14:54:27.737325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.616 [2024-07-25 14:54:27.737930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.616 [2024-07-25 14:54:27.737948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.616 [2024-07-25 14:54:27.737956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.616 [2024-07-25 14:54:27.738139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.616 [2024-07-25 14:54:27.738332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.616 [2024-07-25 14:54:27.738340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.616 [2024-07-25 14:54:27.738355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.616 [2024-07-25 14:54:27.741125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.616 [2024-07-25 14:54:27.750397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.616 [2024-07-25 14:54:27.751023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.616 [2024-07-25 14:54:27.751039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.616 [2024-07-25 14:54:27.751052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.616 [2024-07-25 14:54:27.751247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.616 [2024-07-25 14:54:27.751425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.616 [2024-07-25 14:54:27.751434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.616 [2024-07-25 14:54:27.751440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.617 [2024-07-25 14:54:27.754244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.617 [2024-07-25 14:54:27.763386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.617 [2024-07-25 14:54:27.764049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.617 [2024-07-25 14:54:27.764065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.617 [2024-07-25 14:54:27.764073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.617 [2024-07-25 14:54:27.764251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.617 [2024-07-25 14:54:27.764428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.617 [2024-07-25 14:54:27.764436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.617 [2024-07-25 14:54:27.764442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.617 [2024-07-25 14:54:27.767299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.617 [2024-07-25 14:54:27.774109] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.617 [2024-07-25 14:54:27.774136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.617 [2024-07-25 14:54:27.774143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.617 [2024-07-25 14:54:27.774148] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.617 [2024-07-25 14:54:27.774153] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:07.617 [2024-07-25 14:54:27.774207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.617 [2024-07-25 14:54:27.774292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.617 [2024-07-25 14:54:27.774293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.617 [2024-07-25 14:54:27.776473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.617 [2024-07-25 14:54:27.777105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.617 [2024-07-25 14:54:27.777122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.617 [2024-07-25 14:54:27.777129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.617 [2024-07-25 14:54:27.777312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.617 [2024-07-25 14:54:27.777491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.617 [2024-07-25 14:54:27.777500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.617 [2024-07-25 14:54:27.777506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.617 [2024-07-25 14:54:27.780348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.617 [2024-07-25 14:54:27.789604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.617 [2024-07-25 14:54:27.790340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.617 [2024-07-25 14:54:27.790360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.617 [2024-07-25 14:54:27.790368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.617 [2024-07-25 14:54:27.790548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.617 [2024-07-25 14:54:27.790726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.617 [2024-07-25 14:54:27.790735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.617 [2024-07-25 14:54:27.790741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.617 [2024-07-25 14:54:27.793581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
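For reference on the CPU layout seen in this block: the replacement target was launched with '-m 0xE', and 0xE is binary 1110, i.e. a core mask selecting cores 1, 2 and 3. That is consistent with the "Total cores available: 3" notice above and with the three reactors started on cores 1, 2 and 3 here.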
00:27:07.617 [2024-07-25 14:54:27.802788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.617 [2024-07-25 14:54:27.803524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.617 [2024-07-25 14:54:27.803544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.617 [2024-07-25 14:54:27.803551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.617 [2024-07-25 14:54:27.803730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.617 [2024-07-25 14:54:27.803907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.617 [2024-07-25 14:54:27.803916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.617 [2024-07-25 14:54:27.803923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.617 [2024-07-25 14:54:27.806764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.617 [2024-07-25 14:54:27.815972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.617 [2024-07-25 14:54:27.816633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.617 [2024-07-25 14:54:27.816651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.617 [2024-07-25 14:54:27.816659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.617 [2024-07-25 14:54:27.816837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.617 [2024-07-25 14:54:27.817016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.617 [2024-07-25 14:54:27.817025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.617 [2024-07-25 14:54:27.817037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.617 [2024-07-25 14:54:27.819873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.617 [2024-07-25 14:54:27.829101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.617 [2024-07-25 14:54:27.829742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.617 [2024-07-25 14:54:27.829761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.617 [2024-07-25 14:54:27.829768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.617 [2024-07-25 14:54:27.829945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.617 [2024-07-25 14:54:27.830129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.617 [2024-07-25 14:54:27.830138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.617 [2024-07-25 14:54:27.830145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.617 [2024-07-25 14:54:27.832977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.617 [2024-07-25 14:54:27.842227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.617 [2024-07-25 14:54:27.842813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.617 [2024-07-25 14:54:27.842830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.617 [2024-07-25 14:54:27.842837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.617 [2024-07-25 14:54:27.843016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.617 [2024-07-25 14:54:27.843198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.617 [2024-07-25 14:54:27.843207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.617 [2024-07-25 14:54:27.843213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.617 [2024-07-25 14:54:27.846050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.617 [2024-07-25 14:54:27.855423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.617 [2024-07-25 14:54:27.856153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.617 [2024-07-25 14:54:27.856170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.617 [2024-07-25 14:54:27.856177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.617 [2024-07-25 14:54:27.856354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.617 [2024-07-25 14:54:27.856532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.617 [2024-07-25 14:54:27.856540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.617 [2024-07-25 14:54:27.856547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.617 [2024-07-25 14:54:27.859384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.617 [2024-07-25 14:54:27.868588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.617 [2024-07-25 14:54:27.869303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.617 [2024-07-25 14:54:27.869320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.617 [2024-07-25 14:54:27.869327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.617 [2024-07-25 14:54:27.869504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.617 [2024-07-25 14:54:27.869682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.617 [2024-07-25 14:54:27.869690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.617 [2024-07-25 14:54:27.869697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.617 [2024-07-25 14:54:27.872535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.617 [2024-07-25 14:54:27.881738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.617 [2024-07-25 14:54:27.882437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.618 [2024-07-25 14:54:27.882453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.618 [2024-07-25 14:54:27.882460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.618 [2024-07-25 14:54:27.882638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.618 [2024-07-25 14:54:27.882816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.618 [2024-07-25 14:54:27.882823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.618 [2024-07-25 14:54:27.882830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.618 [2024-07-25 14:54:27.885668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.618 [2024-07-25 14:54:27.894869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.618 [2024-07-25 14:54:27.895515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.618 [2024-07-25 14:54:27.895532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.618 [2024-07-25 14:54:27.895538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.618 [2024-07-25 14:54:27.895715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.618 [2024-07-25 14:54:27.895893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.618 [2024-07-25 14:54:27.895901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.618 [2024-07-25 14:54:27.895907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.618 [2024-07-25 14:54:27.898739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.877 [2024-07-25 14:54:27.907943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.877 [2024-07-25 14:54:27.908662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.877 [2024-07-25 14:54:27.908679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.877 [2024-07-25 14:54:27.908686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.877 [2024-07-25 14:54:27.908866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.877 [2024-07-25 14:54:27.909050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.877 [2024-07-25 14:54:27.909058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.877 [2024-07-25 14:54:27.909064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.877 [2024-07-25 14:54:27.911893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.877 [2024-07-25 14:54:27.921095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.877 [2024-07-25 14:54:27.921670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.877 [2024-07-25 14:54:27.921686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.877 [2024-07-25 14:54:27.921693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.877 [2024-07-25 14:54:27.921870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.877 [2024-07-25 14:54:27.922053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.877 [2024-07-25 14:54:27.922062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.877 [2024-07-25 14:54:27.922068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.877 [2024-07-25 14:54:27.924895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.877 [2024-07-25 14:54:27.934281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.877 [2024-07-25 14:54:27.934926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.877 [2024-07-25 14:54:27.934942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.877 [2024-07-25 14:54:27.934948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.877 [2024-07-25 14:54:27.935129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.877 [2024-07-25 14:54:27.935307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.877 [2024-07-25 14:54:27.935315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.877 [2024-07-25 14:54:27.935321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.877 [2024-07-25 14:54:27.938161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.877 [2024-07-25 14:54:27.947350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.877 [2024-07-25 14:54:27.947999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.877 [2024-07-25 14:54:27.948015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.877 [2024-07-25 14:54:27.948022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.877 [2024-07-25 14:54:27.948203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.877 [2024-07-25 14:54:27.948381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.877 [2024-07-25 14:54:27.948389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.877 [2024-07-25 14:54:27.948398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.877 [2024-07-25 14:54:27.951236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.877 [2024-07-25 14:54:27.960438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.877 [2024-07-25 14:54:27.961012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.877 [2024-07-25 14:54:27.961028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.878 [2024-07-25 14:54:27.961034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.878 [2024-07-25 14:54:27.961216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.878 [2024-07-25 14:54:27.961393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.878 [2024-07-25 14:54:27.961401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.878 [2024-07-25 14:54:27.961407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.878 [2024-07-25 14:54:27.964240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.878 [2024-07-25 14:54:27.973614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.878 [2024-07-25 14:54:27.974315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.878 [2024-07-25 14:54:27.974331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.878 [2024-07-25 14:54:27.974338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.878 [2024-07-25 14:54:27.974515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.878 [2024-07-25 14:54:27.974694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.878 [2024-07-25 14:54:27.974702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.878 [2024-07-25 14:54:27.974707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.878 [2024-07-25 14:54:27.977543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.878 [2024-07-25 14:54:27.986748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.878 [2024-07-25 14:54:27.987377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.878 [2024-07-25 14:54:27.987393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.878 [2024-07-25 14:54:27.987400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.878 [2024-07-25 14:54:27.987577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.878 [2024-07-25 14:54:27.987754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.878 [2024-07-25 14:54:27.987762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.878 [2024-07-25 14:54:27.987768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.878 [2024-07-25 14:54:27.990608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.878 [2024-07-25 14:54:27.999850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.878 [2024-07-25 14:54:28.000484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.878 [2024-07-25 14:54:28.000505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.878 [2024-07-25 14:54:28.000512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.878 [2024-07-25 14:54:28.000688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.878 [2024-07-25 14:54:28.000865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.878 [2024-07-25 14:54:28.000873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.878 [2024-07-25 14:54:28.000879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.878 [2024-07-25 14:54:28.003713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.878 [2024-07-25 14:54:28.012917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.878 [2024-07-25 14:54:28.013571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.878 [2024-07-25 14:54:28.013587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.878 [2024-07-25 14:54:28.013594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.878 [2024-07-25 14:54:28.013771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.878 [2024-07-25 14:54:28.013947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.878 [2024-07-25 14:54:28.013955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.878 [2024-07-25 14:54:28.013961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.878 [2024-07-25 14:54:28.016798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.878 [2024-07-25 14:54:28.026010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.878 [2024-07-25 14:54:28.026588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.878 [2024-07-25 14:54:28.026604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.878 [2024-07-25 14:54:28.026611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.878 [2024-07-25 14:54:28.026788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.878 [2024-07-25 14:54:28.026966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.878 [2024-07-25 14:54:28.026974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.878 [2024-07-25 14:54:28.026980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.878 [2024-07-25 14:54:28.029816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.878 [2024-07-25 14:54:28.039189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.878 [2024-07-25 14:54:28.039831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.878 [2024-07-25 14:54:28.039846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.878 [2024-07-25 14:54:28.039853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.878 [2024-07-25 14:54:28.040030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.878 [2024-07-25 14:54:28.040217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.878 [2024-07-25 14:54:28.040226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.878 [2024-07-25 14:54:28.040232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.878 [2024-07-25 14:54:28.043066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.878 [2024-07-25 14:54:28.052269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.878 [2024-07-25 14:54:28.052915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.878 [2024-07-25 14:54:28.052931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.878 [2024-07-25 14:54:28.052937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.878 [2024-07-25 14:54:28.053117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.878 [2024-07-25 14:54:28.053299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.878 [2024-07-25 14:54:28.053307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.878 [2024-07-25 14:54:28.053313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.878 [2024-07-25 14:54:28.056148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.878 [2024-07-25 14:54:28.065351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.878 [2024-07-25 14:54:28.066032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.878 [2024-07-25 14:54:28.066054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.878 [2024-07-25 14:54:28.066061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.878 [2024-07-25 14:54:28.066237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.878 [2024-07-25 14:54:28.066415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.878 [2024-07-25 14:54:28.066423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.878 [2024-07-25 14:54:28.066429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.878 [2024-07-25 14:54:28.069263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.878 [2024-07-25 14:54:28.078465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.878 [2024-07-25 14:54:28.079173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.878 [2024-07-25 14:54:28.079189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.878 [2024-07-25 14:54:28.079196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.878 [2024-07-25 14:54:28.079372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.878 [2024-07-25 14:54:28.079549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.878 [2024-07-25 14:54:28.079557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.878 [2024-07-25 14:54:28.079563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.878 [2024-07-25 14:54:28.082401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.878 [2024-07-25 14:54:28.091594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.878 [2024-07-25 14:54:28.092307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.878 [2024-07-25 14:54:28.092323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.879 [2024-07-25 14:54:28.092332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.879 [2024-07-25 14:54:28.092511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.879 [2024-07-25 14:54:28.092689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.879 [2024-07-25 14:54:28.092697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.879 [2024-07-25 14:54:28.092704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.879 [2024-07-25 14:54:28.095537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.879 [2024-07-25 14:54:28.104729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.879 [2024-07-25 14:54:28.105365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.879 [2024-07-25 14:54:28.105381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.879 [2024-07-25 14:54:28.105389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.879 [2024-07-25 14:54:28.105566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.879 [2024-07-25 14:54:28.105745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.879 [2024-07-25 14:54:28.105754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.879 [2024-07-25 14:54:28.105761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.879 [2024-07-25 14:54:28.108600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.879 [2024-07-25 14:54:28.117809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.879 [2024-07-25 14:54:28.118505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.879 [2024-07-25 14:54:28.118521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.879 [2024-07-25 14:54:28.118529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.879 [2024-07-25 14:54:28.118707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.879 [2024-07-25 14:54:28.118886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.879 [2024-07-25 14:54:28.118895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.879 [2024-07-25 14:54:28.118902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.879 [2024-07-25 14:54:28.121740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.879 [2024-07-25 14:54:28.130953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.879 [2024-07-25 14:54:28.131653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.879 [2024-07-25 14:54:28.131669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.879 [2024-07-25 14:54:28.131679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.879 [2024-07-25 14:54:28.131857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.879 [2024-07-25 14:54:28.132035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.879 [2024-07-25 14:54:28.132047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.879 [2024-07-25 14:54:28.132053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.879 [2024-07-25 14:54:28.134885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.879 [2024-07-25 14:54:28.144105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.879 [2024-07-25 14:54:28.144783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.879 [2024-07-25 14:54:28.144798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.879 [2024-07-25 14:54:28.144805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.879 [2024-07-25 14:54:28.144981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.879 [2024-07-25 14:54:28.145163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.879 [2024-07-25 14:54:28.145172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.879 [2024-07-25 14:54:28.145177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.879 [2024-07-25 14:54:28.148008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.879 [2024-07-25 14:54:28.157205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.879 [2024-07-25 14:54:28.157892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.879 [2024-07-25 14:54:28.157908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:07.879 [2024-07-25 14:54:28.157915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:07.879 [2024-07-25 14:54:28.158095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:07.879 [2024-07-25 14:54:28.158272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.879 [2024-07-25 14:54:28.158280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.879 [2024-07-25 14:54:28.158287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.879 [2024-07-25 14:54:28.161122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.139 [2024-07-25 14:54:28.170312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.139 [2024-07-25 14:54:28.170946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.139 [2024-07-25 14:54:28.170961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.139 [2024-07-25 14:54:28.170968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.139 [2024-07-25 14:54:28.171148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.139 [2024-07-25 14:54:28.171331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.139 [2024-07-25 14:54:28.171343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.139 [2024-07-25 14:54:28.171350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.139 [2024-07-25 14:54:28.174186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.139 [2024-07-25 14:54:28.183376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.139 [2024-07-25 14:54:28.184073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.139 [2024-07-25 14:54:28.184089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.139 [2024-07-25 14:54:28.184096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.139 [2024-07-25 14:54:28.184274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.139 [2024-07-25 14:54:28.184452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.139 [2024-07-25 14:54:28.184460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.139 [2024-07-25 14:54:28.184466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.139 [2024-07-25 14:54:28.187300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.139 [2024-07-25 14:54:28.196490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.139 [2024-07-25 14:54:28.197229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.139 [2024-07-25 14:54:28.197245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.139 [2024-07-25 14:54:28.197252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.139 [2024-07-25 14:54:28.197429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.139 [2024-07-25 14:54:28.197606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.139 [2024-07-25 14:54:28.197614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.139 [2024-07-25 14:54:28.197620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.139 [2024-07-25 14:54:28.200475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.139 [2024-07-25 14:54:28.209677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.139 [2024-07-25 14:54:28.210394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.139 [2024-07-25 14:54:28.210411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.210418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.210595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.210777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.210785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.210791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.213626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.140 [2024-07-25 14:54:28.222820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.223540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.223556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.223562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.223739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.223917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.223925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.223931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.226800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.140 [2024-07-25 14:54:28.235997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.236697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.236714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.236720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.236897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.237078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.237086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.237092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.239923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.140 [2024-07-25 14:54:28.249121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.249759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.249774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.249781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.249959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.250140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.250148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.250154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.252987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.140 [2024-07-25 14:54:28.262180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.262912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.262928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.262935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.263118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.263295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.263304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.263310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.266145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.140 [2024-07-25 14:54:28.275340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.275983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.275998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.276005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.276186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.276364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.276373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.276379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.279215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.140 [2024-07-25 14:54:28.288410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.289106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.289122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.289130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.289307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.289484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.289493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.289499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.292525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.140 [2024-07-25 14:54:28.301562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.302282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.302299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.302306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.302484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.302662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.302670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.302679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.305516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.140 [2024-07-25 14:54:28.314718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.315422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.315438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.315445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.315623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.315800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.315808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.315815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.318650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.140 [2024-07-25 14:54:28.327847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.328566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.328582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.328590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.328767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.328944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.328952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.328959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.331793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.140 [2024-07-25 14:54:28.340990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.341634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.341650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.341657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.341834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.342011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.342019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.342025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.344859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.140 [2024-07-25 14:54:28.354052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.354774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.354790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.354797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.354974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.355155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.355164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.355170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.357999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.140 [2024-07-25 14:54:28.367204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.367898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.367914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.367920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.368101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.368279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.368287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.368293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.371127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.140 [2024-07-25 14:54:28.380325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.381050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.381067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.381073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.381251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.381429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.381438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.381445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.384280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.140 [2024-07-25 14:54:28.393479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.394198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.394215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.394223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.394406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.394589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.394598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.394604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.397439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.140 [2024-07-25 14:54:28.406630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.407329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.407345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.407352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.407529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.407706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.407714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.407721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.410577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.140 [2024-07-25 14:54:28.419794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.140 [2024-07-25 14:54:28.420445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.140 [2024-07-25 14:54:28.420461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.140 [2024-07-25 14:54:28.420469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.140 [2024-07-25 14:54:28.420647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.140 [2024-07-25 14:54:28.420825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.140 [2024-07-25 14:54:28.420832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.140 [2024-07-25 14:54:28.420839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.140 [2024-07-25 14:54:28.423673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.399 [2024-07-25 14:54:28.432876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.399 [2024-07-25 14:54:28.433605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.399 [2024-07-25 14:54:28.433621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.399 [2024-07-25 14:54:28.433628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.399 [2024-07-25 14:54:28.433806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.399 [2024-07-25 14:54:28.433983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.399 [2024-07-25 14:54:28.433991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.400 [2024-07-25 14:54:28.433998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.400 [2024-07-25 14:54:28.436836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.400 [2024-07-25 14:54:28.446019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.400 [2024-07-25 14:54:28.446535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.400 [2024-07-25 14:54:28.446551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.400 [2024-07-25 14:54:28.446558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.400 [2024-07-25 14:54:28.446735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.400 [2024-07-25 14:54:28.446911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.400 [2024-07-25 14:54:28.446919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.400 [2024-07-25 14:54:28.446926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:08.400 [2024-07-25 14:54:28.449755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.400 [2024-07-25 14:54:28.459110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.400 [2024-07-25 14:54:28.459755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.400 [2024-07-25 14:54:28.459772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.400 [2024-07-25 14:54:28.459781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.400 [2024-07-25 14:54:28.459958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.400 [2024-07-25 14:54:28.460142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.400 [2024-07-25 14:54:28.460150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.400 [2024-07-25 14:54:28.460156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.400 [2024-07-25 14:54:28.462989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.400 [2024-07-25 14:54:28.472194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.400 [2024-07-25 14:54:28.472858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.400 [2024-07-25 14:54:28.472874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.400 [2024-07-25 14:54:28.472881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.400 [2024-07-25 14:54:28.473063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.400 [2024-07-25 14:54:28.473242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.400 [2024-07-25 14:54:28.473251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.400 [2024-07-25 14:54:28.473257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.400 [2024-07-25 14:54:28.476097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.400 [2024-07-25 14:54:28.485301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.400 [2024-07-25 14:54:28.485920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.400 [2024-07-25 14:54:28.485936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.400 [2024-07-25 14:54:28.485943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.400 [2024-07-25 14:54:28.486125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.400 [2024-07-25 14:54:28.486302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.400 [2024-07-25 14:54:28.486310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.400 [2024-07-25 14:54:28.486316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.400 [2024-07-25 14:54:28.486944] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.400 [2024-07-25 14:54:28.489152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.400 [2024-07-25 14:54:28.498345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.400 [2024-07-25 14:54:28.498887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.400 [2024-07-25 14:54:28.498903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.400 [2024-07-25 14:54:28.498909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.400 [2024-07-25 14:54:28.499090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.400 [2024-07-25 14:54:28.499267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.400 [2024-07-25 14:54:28.499275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.400 [2024-07-25 14:54:28.499282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.400 [2024-07-25 14:54:28.502116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.400 [2024-07-25 14:54:28.511481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.400 [2024-07-25 14:54:28.512190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.400 [2024-07-25 14:54:28.512207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.400 [2024-07-25 14:54:28.512214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.400 [2024-07-25 14:54:28.512390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.400 [2024-07-25 14:54:28.512571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.400 [2024-07-25 14:54:28.512580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.400 [2024-07-25 14:54:28.512586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.400 [2024-07-25 14:54:28.515424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.400 [2024-07-25 14:54:28.524641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.400 [2024-07-25 14:54:28.525358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.400 [2024-07-25 14:54:28.525377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.400 [2024-07-25 14:54:28.525385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.400 [2024-07-25 14:54:28.525564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.400 [2024-07-25 14:54:28.525741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.400 [2024-07-25 14:54:28.525749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.400 [2024-07-25 14:54:28.525756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.400 Malloc0 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.400 [2024-07-25 14:54:28.528603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.400 [2024-07-25 14:54:28.537795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.400 [2024-07-25 14:54:28.538509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.400 [2024-07-25 14:54:28.538526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d980 with addr=10.0.0.2, port=4420 00:27:08.400 [2024-07-25 14:54:28.538533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d980 is same with the state(5) to be set 00:27:08.400 [2024-07-25 14:54:28.538710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d980 (9): Bad file descriptor 00:27:08.400 [2024-07-25 14:54:28.538888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.400 [2024-07-25 14:54:28.538896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.400 [2024-07-25 14:54:28.538903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.400 [2024-07-25 14:54:28.541736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.400 [2024-07-25 14:54:28.550831] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.400 [2024-07-25 14:54:28.550931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.400 14:54:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2479813 00:27:08.400 [2024-07-25 14:54:28.622634] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:18.426 00:27:18.426 Latency(us) 00:27:18.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.426 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:18.426 Verification LBA range: start 0x0 length 0x4000 00:27:18.426 Nvme1n1 : 15.01 8318.81 32.50 12183.01 0.00 6222.92 861.94 31229.33 00:27:18.426 =================================================================================================================== 00:27:18.426 Total : 8318.81 32.50 12183.01 0.00 6222.92 861.94 31229.33 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:18.426 rmmod nvme_tcp 00:27:18.426 rmmod nvme_fabrics 00:27:18.426 rmmod nvme_keyring 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2480743 ']' 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2480743 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2480743 ']' 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2480743 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2480743 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2480743' 00:27:18.426 killing process with pid 2480743 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2480743 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2480743 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.426 14:54:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.366 14:54:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:19.366 00:27:19.366 real 0m25.918s 00:27:19.366 user 1m2.377s 00:27:19.366 sys 0m6.147s 00:27:19.366 14:54:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:19.366 14:54:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:19.366 ************************************ 00:27:19.366 END TEST nvmf_bdevperf 00:27:19.366 ************************************ 00:27:19.366 14:54:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:19.366 14:54:39 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:19.366 14:54:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:19.366 14:54:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.366 14:54:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:19.366 ************************************ 00:27:19.366 START TEST nvmf_target_disconnect 00:27:19.366 ************************************ 00:27:19.366 14:54:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:19.625 * Looking for test storage... 
00:27:19.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.625 14:54:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:19.626 14:54:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:24.911 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.911 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:24.912 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.912 14:54:44 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:24.912 Found net devices under 0000:86:00.0: cvl_0_0 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:24.912 Found net devices under 0000:86:00.1: cvl_0_1 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.912 14:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:24.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:27:24.912 00:27:24.912 --- 10.0.0.2 ping statistics --- 00:27:24.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.912 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:24.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:27:24.912 00:27:24.912 --- 10.0.0.1 ping statistics --- 00:27:24.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.912 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:24.912 ************************************ 00:27:24.912 START TEST nvmf_target_disconnect_tc1 00:27:24.912 ************************************ 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:27:24.912 
14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:24.912 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.912 [2024-07-25 14:54:45.164239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.912 [2024-07-25 14:54:45.164342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb4e60 with addr=10.0.0.2, port=4420 00:27:24.912 [2024-07-25 14:54:45.164389] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:24.912 [2024-07-25 14:54:45.164417] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:24.912 [2024-07-25 14:54:45.164424] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:24.912 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:24.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:24.912 Initializing NVMe Controllers 00:27:24.912 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:27:24.913 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:24.913 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:24.913 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:24.913 00:27:24.913 real 0m0.103s 00:27:24.913 user 0m0.046s 00:27:24.913 sys 0m0.056s 
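Note: the connect() failures above are the expected result for tc1. Nothing is listening on 10.0.0.2:4420 at this point, so spdk_nvme_probe() fails with errno 111 (ECONNREFUSED) and the NOT wrapper turns that failure into a pass (es=1). A hedged paraphrase of the assertion, not the literal target_disconnect.sh source:

    # tc1 passes only if the reconnect example FAILS to reach the (absent) target.
    if ! /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
            -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "tc1 OK: connection attempt was refused, as expected"
    fi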
00:27:24.913 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:24.913 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:24.913 ************************************ 00:27:24.913 END TEST nvmf_target_disconnect_tc1 00:27:24.913 ************************************ 00:27:25.173 14:54:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:25.173 14:54:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:25.173 14:54:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:25.173 14:54:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:25.173 14:54:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:25.173 ************************************ 00:27:25.173 START TEST nvmf_target_disconnect_tc2 00:27:25.173 ************************************ 00:27:25.173 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:27:25.173 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:25.173 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:25.173 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:25.174 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:25.174 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.174 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2485872 00:27:25.174 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2485872 00:27:25.174 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:25.174 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2485872 ']' 00:27:25.174 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.174 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:25.174 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
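Note: tc2 now brings up a real target. The target runs inside the cvl_0_0_ns_spdk namespace, where cvl_0_0 holds 10.0.0.2, while the initiator side stays on host interface cvl_0_1 (10.0.0.1). A hedged sketch of the bring-up, paraphrasing only the commands visible in this log:

    # Start nvmf_tgt in the target namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # waitforlisten polls /var/tmp/spdk.sock until the target answers RPCs,
    # then the subsystem is configured over RPC (seen below):
    #   bdev_malloc_create 64 512 -b Malloc0
    #   nvmf_create_transport -t tcp -o
    #   nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    #   nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    #   nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    #   nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420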
00:27:25.174 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:25.174 14:54:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.174 [2024-07-25 14:54:45.299091] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:27:25.174 [2024-07-25 14:54:45.299146] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.174 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.174 [2024-07-25 14:54:45.367724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:25.174 [2024-07-25 14:54:45.440292] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.174 [2024-07-25 14:54:45.440335] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.174 [2024-07-25 14:54:45.440342] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.174 [2024-07-25 14:54:45.440347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.174 [2024-07-25 14:54:45.440352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.174 [2024-07-25 14:54:45.440483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:25.174 [2024-07-25 14:54:45.440581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:25.174 [2024-07-25 14:54:45.440666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:25.174 [2024-07-25 14:54:45.440667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.113 Malloc0 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:26.113 14:54:46 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.113 [2024-07-25 14:54:46.168857] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.113 [2024-07-25 14:54:46.197851] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2485932 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:26.113 14:54:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:26.113 EAL: No free 2048 kB 
hugepages reported on node 1 00:27:28.024 14:54:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2485872 00:27:28.024 14:54:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Write completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Write completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Write completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Write completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Write completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Write completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Write completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Write completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Write completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Write completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Write completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Write completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Write completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 [2024-07-25 14:54:48.226130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 
starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Read completed with error (sct=0, sc=8) 00:27:28.024 starting I/O failed 00:27:28.024 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 [2024-07-25 14:54:48.226343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 
00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Write completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 Read completed with error (sct=0, sc=8) 00:27:28.025 starting I/O failed 00:27:28.025 [2024-07-25 14:54:48.226532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:28.025 [2024-07-25 14:54:48.227107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.227124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.227578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.227609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.228139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.228170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.228651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.228681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 
00:27:28.025 [2024-07-25 14:54:48.229205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.229236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.229721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.229751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.230163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.230193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.230567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.230596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.231073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.231104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.231573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.231602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.232156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.232188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.232612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.232641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.233114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.233145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.233605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.233634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 
00:27:28.025 [2024-07-25 14:54:48.234170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.234201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.234593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.234622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.234825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.234855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.235339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.235370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.235771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.235806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.025 [2024-07-25 14:54:48.236350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.025 [2024-07-25 14:54:48.236381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.025 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.236794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.236808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.237290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.237321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.237792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.237821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.238313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.238344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 
00:27:28.026 [2024-07-25 14:54:48.238773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.238803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.239350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.239381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.239954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.239984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.240439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.240469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.240940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.240969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.241447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.241477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.242026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.242067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.242548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.242561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.243067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.243081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.243512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.243526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 
00:27:28.026 [2024-07-25 14:54:48.244039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.244057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.244543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.244556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.245053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.245067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.245569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.245582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.246012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.246025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.246495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.246526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.247067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.247098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.247568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.247597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.248145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.248176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.248720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.248749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 
00:27:28.026 [2024-07-25 14:54:48.249003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.249032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.249586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.249615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.250145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.250176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.250717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.250745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.251223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.251254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.251743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.251772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.252292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.252322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.252808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.252837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.253316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.253346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.253820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.253849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 
00:27:28.026 [2024-07-25 14:54:48.254390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.254419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.254883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.026 [2024-07-25 14:54:48.254913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.026 qpair failed and we were unable to recover it. 00:27:28.026 [2024-07-25 14:54:48.255459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.255489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.256007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.256036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.256584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.256617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.257166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.257202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.257462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.257492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.258033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.258072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.258540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.258570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.259034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.259078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 
00:27:28.027 [2024-07-25 14:54:48.259580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.259610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.260081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.260111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.260685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.260715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.261277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.261291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.261737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.261766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.262326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.262356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.262823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.262853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.263325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.263355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.263838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.263867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.264395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.264426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 
00:27:28.027 [2024-07-25 14:54:48.264882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.264896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.265332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.265346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.265806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.265835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.266399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.266430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.266949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.266979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.267446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.267477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.267952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.267981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.268502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.268532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.269006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.269036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.269566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.269596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 
00:27:28.027 [2024-07-25 14:54:48.270135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.270150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.270583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.270597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.271097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.271127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.271597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.271627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.272168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.272218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.272689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.272718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.027 [2024-07-25 14:54:48.273218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.027 [2024-07-25 14:54:48.273248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.027 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.273809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.273839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.274240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.274270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.274785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.274815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 
00:27:28.028 [2024-07-25 14:54:48.275216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.275252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.275619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.275633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.276146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.276176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.276697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.276726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.277271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.277302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.277773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.277802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.278282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.278313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.278728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.278758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.279228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.279257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.279804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.279834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 
00:27:28.028 [2024-07-25 14:54:48.280352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.280382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.280900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.280929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.281496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.281526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.282066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.282096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.282566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.282597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.283099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.283113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.283568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.283598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.283993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.284022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.284503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.284534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.285017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.285054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 
00:27:28.028 [2024-07-25 14:54:48.285580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.285611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.286132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.286163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.286733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.286762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.287289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.287319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.287843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.287873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.288338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.288352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.288861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.288891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.289433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.289464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.289993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.290022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.290496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.290513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 
00:27:28.028 [2024-07-25 14:54:48.291006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.028 [2024-07-25 14:54:48.291022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.028 qpair failed and we were unable to recover it. 00:27:28.028 [2024-07-25 14:54:48.291527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.291559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.292099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.292129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.292695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.292731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.293264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.293294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.293814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.293844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.294320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.294350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.294872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.294886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.295397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.295428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.295985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.296015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 
00:27:28.029 [2024-07-25 14:54:48.296556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.296587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.297041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.297082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.297621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.297651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.298194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.298225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.298752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.298782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.299272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.299303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.299725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.299755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.300278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.029 [2024-07-25 14:54:48.300309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.029 qpair failed and we were unable to recover it. 00:27:28.029 [2024-07-25 14:54:48.300839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.300853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.301081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.301095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 
00:27:28.030 [2024-07-25 14:54:48.301562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.301592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.302072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.302103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.302579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.302608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.303128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.303159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.303660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.303691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.304210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.304240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.304765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.304794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.305196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.305226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.305629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.305659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.306132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.306163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 
00:27:28.030 [2024-07-25 14:54:48.306685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.306715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.307267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.307298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.307844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.307874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.308402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.308416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.308954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.308983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.309456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.309487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.310076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.310106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.310663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.310693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.311259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.311289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.311762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.311791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 
00:27:28.030 [2024-07-25 14:54:48.312338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.312368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.312832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.312861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.313338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.313352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.030 [2024-07-25 14:54:48.313802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.030 [2024-07-25 14:54:48.313832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.030 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.314384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.314421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.314891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.314921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.315192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.315223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.315628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.315642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.316150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.316164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.316656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.316685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 
00:27:28.298 [2024-07-25 14:54:48.317159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.317189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.317733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.317763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.318016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.318055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.318521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.318550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.318991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.319005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.319518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.319549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.320094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.320125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.320615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.320645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.321221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.321252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-07-25 14:54:48.321718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-07-25 14:54:48.321747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 
00:27:28.298 [2024-07-25 14:54:48.322225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.322255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.322694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.322723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.323179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.323210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.323746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.323760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.324191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.324205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.324700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.324714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.325176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.325190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.325612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.325642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.326165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.326194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.326617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.326647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 
00:27:28.299 [2024-07-25 14:54:48.327216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.327246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.327733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.327768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.328236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.328266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.328736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.328765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.329235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.329266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.329784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.329813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.330291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.330330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.330768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.330782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.331251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.331283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.331826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.331856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 
00:27:28.299 [2024-07-25 14:54:48.332331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.332345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.332778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.332791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.333170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.333184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.333553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.333567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.334061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.334092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.334649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.334679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.335213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.335244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.335777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.335806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.336273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.336304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.336768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.336798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 
00:27:28.299 [2024-07-25 14:54:48.337341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.337372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.337846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.337875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.338461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.338491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.338916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.338945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.339485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.339516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.340107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.340138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-07-25 14:54:48.340672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-07-25 14:54:48.340702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.341283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.341323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.341816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.341846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.342336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.342367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 
00:27:28.300 [2024-07-25 14:54:48.342914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.342928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.343382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.343414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.343975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.344005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.344480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.344510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.345060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.345090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.345502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.345532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.346062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.346092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.346562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.346592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.347081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.347113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.347531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.347561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 
00:27:28.300 [2024-07-25 14:54:48.347953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.347983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.348454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.348485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.348968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.349003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.349553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.349584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.350085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.350116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.350684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.350714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.351253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.351284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.351832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.351861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.352385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.352415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.352963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.352993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 
00:27:28.300 [2024-07-25 14:54:48.353537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.353567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.354068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.354098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.354510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.354540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.354953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.354967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.355405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.355436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.355929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.355959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.356418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.356449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.356995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.357025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.357495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.357533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.358069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.358100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 
00:27:28.300 [2024-07-25 14:54:48.358619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.358649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.359188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.359218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.359739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.359768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.360317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.360348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.300 [2024-07-25 14:54:48.360847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.300 [2024-07-25 14:54:48.360877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.300 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.361347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.361377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.361925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.361954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.362501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.362532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.363007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.363037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.363571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.363607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 
00:27:28.301 [2024-07-25 14:54:48.364075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.364106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.364520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.364550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.364745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.364775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.365301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.365331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.365787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.365817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.366363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.366393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.366869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.366899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.367444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.367474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.367740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.367770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.368327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.368358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 
00:27:28.301 [2024-07-25 14:54:48.368820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.368850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.369308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.369338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.369823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.369853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.370402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.370432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.370960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.370989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.371532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.371563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.372107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.372139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.372674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.372704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.373173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.373203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.373728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.373758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 
00:27:28.301 [2024-07-25 14:54:48.374309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.374340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.374910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.374940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.375411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.375440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.375911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.375941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.376491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.376540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.376984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.377014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.377508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.377539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.378055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.378086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.378543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.378572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.379115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.379146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 
00:27:28.301 [2024-07-25 14:54:48.379615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.379646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.380137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.380168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.301 qpair failed and we were unable to recover it. 00:27:28.301 [2024-07-25 14:54:48.380708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.301 [2024-07-25 14:54:48.380738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.381277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.381314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.381831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.381861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.382431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.382474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.382921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.382950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.383435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.383473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.383990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.384004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.384426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.384440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 
00:27:28.302 [2024-07-25 14:54:48.384893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.384928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.385474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.385507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.386027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.386066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.386606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.386635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.387092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.387123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.387671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.387700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.388094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.388125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.388685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.388715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.389262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.389292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.389818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.389848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 
00:27:28.302 [2024-07-25 14:54:48.390312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.390343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.390874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.390904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.391470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.391500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.392029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.392070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.392565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.392594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.392996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.393025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.393564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.393594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.394080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.394112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.394524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.394554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.395095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.395127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 
00:27:28.302 [2024-07-25 14:54:48.395665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.395695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.395950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.395980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.396451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.396481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.397003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.397033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.397512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.397542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.398016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.398053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.398552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.398582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.399146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.399182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.399738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.399768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-07-25 14:54:48.400196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-07-25 14:54:48.400227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 
00:27:28.302 [2024-07-25 14:54:48.400767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.400797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.401317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.401347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.401814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.401844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.402372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.402403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.402956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.402986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.403410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.403441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.403932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.403962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.404426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.404456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.404934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.404948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.405465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.405496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 
00:27:28.303 [2024-07-25 14:54:48.405904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.405934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.406474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.406504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.406970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.406999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.407470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.407501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.407966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.407995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.408547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.408578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.409057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.409087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.409628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.409658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.410202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.410232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.410762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.410792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 
00:27:28.303 [2024-07-25 14:54:48.411331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.411361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.411844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.411873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.412282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.412312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.412882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.412911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.413432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.413463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.413995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.414025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.414555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.414585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.415081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.415112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.415596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.415625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-07-25 14:54:48.416170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.416200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 
00:27:28.303 [2024-07-25 14:54:48.416741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-07-25 14:54:48.416771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.417281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.417311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.417830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.417860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.418430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.418460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.418943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.418973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.419518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.419550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.420123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.420153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.420507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.420537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.421052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.421069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.421580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.421610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 
00:27:28.304 [2024-07-25 14:54:48.422160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.422190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.422715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.422751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.423252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.423266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.423721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.423734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.424225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.424256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.424780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.424793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.425229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.425259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.425740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.425769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.426241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.426271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.426682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.426712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 
00:27:28.304 [2024-07-25 14:54:48.427188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.427218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.427758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.427788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.428208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.428238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.428778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.428807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.429353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.429383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.429912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.429942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.430409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.430439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.430928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.430958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.431448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.431487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.431930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.431960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 
00:27:28.304 [2024-07-25 14:54:48.432422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.432453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.432973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.433003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.433576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.433606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.434135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.434165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.434624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.434654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.435195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.435226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.435753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.435783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.436338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.436369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.436903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-07-25 14:54:48.436932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-07-25 14:54:48.437446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.437476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 
00:27:28.305 [2024-07-25 14:54:48.438040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.438077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.438608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.438638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.439208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.439239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.439771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.439801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.440322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.440352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.440871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.440900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.441372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.441402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.441957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.441986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.442548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.442579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.443008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.443037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 
00:27:28.305 [2024-07-25 14:54:48.443520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.443551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.444004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.444033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.444585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.444616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.445091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.445122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.445642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.445672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.446150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.446181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.446686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.446715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.447290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.447320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.447801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.447830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.448321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.448352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 
00:27:28.305 [2024-07-25 14:54:48.448900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.448930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.449464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.449496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.449983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.450013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.450498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.450528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.451055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.451086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.451506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.451535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.452008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.452038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.452579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.452609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.453130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.453161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.453634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.453664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 
00:27:28.305 [2024-07-25 14:54:48.454207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.454238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.454705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.454735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.454934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.454964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.455482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.455513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.456060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.456091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.456634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.456664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.305 [2024-07-25 14:54:48.457068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.305 [2024-07-25 14:54:48.457105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.305 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.457586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.457615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.458164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.458195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.458652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.458682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 
00:27:28.306 [2024-07-25 14:54:48.459225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.459255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.459741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.459771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.460243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.460273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.460813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.460843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.461383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.461414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.461946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.461975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.462436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.462466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.463020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.463076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.463599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.463629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.464154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.464185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 
00:27:28.306 [2024-07-25 14:54:48.464710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.464740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.465260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.465301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.465816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.465845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.466314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.466345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.466870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.466900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.467374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.467404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.467946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.467976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.468526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.468557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.469127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.469158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.469708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.469737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 
00:27:28.306 [2024-07-25 14:54:48.470201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.470232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.470777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.470806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.471376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.471408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.471894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.471924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.472476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.472507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.472973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.473002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.473555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.473586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.474065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.474095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.474570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.474600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.474853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.474882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 
00:27:28.306 [2024-07-25 14:54:48.475356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.475386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.475638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.475667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.476134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.476165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.476569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.476599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.306 [2024-07-25 14:54:48.477115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.306 [2024-07-25 14:54:48.477128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.306 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.477639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.477653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.478140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.478155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.478637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.478651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.479119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.479149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.479692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.479722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 
00:27:28.307 [2024-07-25 14:54:48.480199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.480224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.480644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.480658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.481177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.481192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.481630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.481644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.482084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.482114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.482628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.482641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.483157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.483171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.483528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.483542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.484026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.484040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.484501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.484514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 
00:27:28.307 [2024-07-25 14:54:48.484942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.484956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.485468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.485482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.485967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.485980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.486355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.486369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.486752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.486765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.487192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.487205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.487651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.487664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.488178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.488208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.488702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.488731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.489290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.489303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 
00:27:28.307 [2024-07-25 14:54:48.489728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.489742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.490195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.490209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.490658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.490688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.491080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.491111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.491658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.491692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.492231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.492245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.492692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.307 [2024-07-25 14:54:48.492705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.307 qpair failed and we were unable to recover it. 00:27:28.307 [2024-07-25 14:54:48.493145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.493159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.493646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.493660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.494144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.494159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 
00:27:28.308 [2024-07-25 14:54:48.494614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.494644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.495121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.495151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.495698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.495727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.496201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.496231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.496775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.496804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.497338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.497351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.497886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.497914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.498463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.498494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.498965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.498979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.499487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.499501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 
00:27:28.308 [2024-07-25 14:54:48.499930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.499944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.500430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.500443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.500713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.500727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.501250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.501264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.501694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.501707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.502195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.502209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.502675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.502689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.503146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.503160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.503676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.503706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.504173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.504187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 
00:27:28.308 [2024-07-25 14:54:48.504640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.504653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.505019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.505033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.505491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.505505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.505954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.505968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.506449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.506480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.507001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.507014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.507451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.507464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.507900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.507914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.508272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.508286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.508792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.508806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 
00:27:28.308 [2024-07-25 14:54:48.509244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.509258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.509691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.509705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.510154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.510185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.510706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.510735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.511248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.308 [2024-07-25 14:54:48.511262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.308 qpair failed and we were unable to recover it. 00:27:28.308 [2024-07-25 14:54:48.511763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.511778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.512149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.512163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.512650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.512664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.513186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.513200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.513638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.513667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 
00:27:28.309 [2024-07-25 14:54:48.514136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.514150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.514583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.514596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.515079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.515093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.515602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.515616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.516069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.516083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.516543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.516556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.517059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.517074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.517594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.517623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.518033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.518075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.518613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.518642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 
00:27:28.309 [2024-07-25 14:54:48.519107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.519121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.519519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.519548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.519945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.519974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.520507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.520521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.521032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.521050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.521594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.521608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.522099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.522128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.522686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.522715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.523112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.523125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.523513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.523526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 
00:27:28.309 [2024-07-25 14:54:48.523952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.523965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.524323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.524336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.524774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.524790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.525165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.525178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.525708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.525722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.526236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.526268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.526756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.526786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.527243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.527257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.527697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.527711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.528215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.528246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 
00:27:28.309 [2024-07-25 14:54:48.528711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.528746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.529258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.529272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.529658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.309 [2024-07-25 14:54:48.529671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.309 qpair failed and we were unable to recover it. 00:27:28.309 [2024-07-25 14:54:48.530128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.530142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.530594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.530607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.531062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.531092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.531510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.531540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.531997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.532011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.532456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.532471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.532895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.532924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 
00:27:28.310 [2024-07-25 14:54:48.533178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.533208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.533672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.533702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.534240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.534253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.534767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.534796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.535259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.535272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.535725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.535738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.536222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.536236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.536670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.536699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.537242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.537256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.537749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.537763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 
00:27:28.310 [2024-07-25 14:54:48.538226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.538240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.538682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.538711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.539232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.539262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.539745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.539775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.540194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.540208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.540703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.540716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.541153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.541183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.541653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.541682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.542089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.542119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.542679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.542693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 
00:27:28.310 [2024-07-25 14:54:48.543061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.543091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.543575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.543605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.544147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.544178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.544651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.544691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.545160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.545190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.545650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.545680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.546231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.546269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.546755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.546768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.547287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.547318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.547840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.547869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 
00:27:28.310 [2024-07-25 14:54:48.548333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.310 [2024-07-25 14:54:48.548364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.310 qpair failed and we were unable to recover it. 00:27:28.310 [2024-07-25 14:54:48.548830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.548860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.549405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.549435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.549976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.550006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.550520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.550551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.551074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.551105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.551605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.551635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.552104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.552135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.552629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.552658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.553145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.553176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 
00:27:28.311 [2024-07-25 14:54:48.553587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.553617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.554165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.554179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.554621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.554651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.555173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.555203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.555606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.555636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.556090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.556121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.556684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.556714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.557234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.557264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.557735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.557765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.558300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.558314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 
00:27:28.311 [2024-07-25 14:54:48.558830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.558865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.559387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.559417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.559962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.559992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.560544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.560575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.561067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.561098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.561578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.561608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.562148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.562178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.562724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.562753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.563206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.563236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.563725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.563755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 
00:27:28.311 [2024-07-25 14:54:48.564227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.564258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.564803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.564833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.565328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.565359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.565925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.565955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.566416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.566447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.566916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.566946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.567442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.567472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.567930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.567959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.311 qpair failed and we were unable to recover it. 00:27:28.311 [2024-07-25 14:54:48.568480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.311 [2024-07-25 14:54:48.568511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.569003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.569033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 
00:27:28.312 [2024-07-25 14:54:48.569512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.569543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.570014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.570052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.570596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.570625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.571142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.571173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.571704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.571734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.572281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.572312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.572850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.572880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.573377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.573407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.573935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.573965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.574434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.574465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 
00:27:28.312 [2024-07-25 14:54:48.575012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.575052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.575530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.575559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.576101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.576132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.576618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.576648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.577051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.577065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.577499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.577512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.578057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.578072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.578465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.578478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.578988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.579002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.579440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.579454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 
00:27:28.312 [2024-07-25 14:54:48.579936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.579949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.580382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.580399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.580775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.580788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.581169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.581182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.581674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.581704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.582196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.582226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.582641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.582670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.312 [2024-07-25 14:54:48.583213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.312 [2024-07-25 14:54:48.583243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.312 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-25 14:54:48.583658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-25 14:54:48.583689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-25 14:54:48.584154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-25 14:54:48.584206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 
00:27:28.580 [2024-07-25 14:54:48.584727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-25 14:54:48.584757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-25 14:54:48.585274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-25 14:54:48.585288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-25 14:54:48.585731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-25 14:54:48.585761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-25 14:54:48.586168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-25 14:54:48.586182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-25 14:54:48.586694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-25 14:54:48.586725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-25 14:54:48.587205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-25 14:54:48.587236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-25 14:54:48.587811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-25 14:54:48.587840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-25 14:54:48.588292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-25 14:54:48.588323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-25 14:54:48.588846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-25 14:54:48.588876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-25 14:54:48.589290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-25 14:54:48.589320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 
00:27:28.581 [2024-07-25 14:54:48.589843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.589872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.590282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.590313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.590781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.590811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.590995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.591008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.591530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.591562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.591963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.591993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.592478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.592509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.593059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.593089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.593632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.593663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.594231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.594245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 
00:27:28.581 [2024-07-25 14:54:48.594757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.594771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.595211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.595243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.595721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.595751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.596154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.596185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.596726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.596755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.597223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.597237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.597690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.597703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.598146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.598176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.598903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.598936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.599404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.599418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 
00:27:28.581 [2024-07-25 14:54:48.599844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.599858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.600314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.600344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.600869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.600900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.601449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.601479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.601904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.601934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.602405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.602420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.602924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.602953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.603694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.603727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.604193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.604208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.604675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.604705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 
00:27:28.581 [2024-07-25 14:54:48.605191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.605221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.606000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.606033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.606563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.606593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.607079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.607110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.607632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.607664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.608188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.608218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.581 [2024-07-25 14:54:48.608754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.581 [2024-07-25 14:54:48.608768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.581 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.609212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.609243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.609714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.609744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.610157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.610188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 
00:27:28.582 [2024-07-25 14:54:48.610688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.610718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.611139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.611170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.611695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.611709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.612155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.612169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.612608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.612638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.613100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.613131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.613598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.613627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.614168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.614199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.614611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.614640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.615159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.615196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 
00:27:28.582 [2024-07-25 14:54:48.615650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.615680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.616145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.616175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.616724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.616754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.617225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.617255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.617548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.617578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.618061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.618091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.618515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.618545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.619112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.619143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.619624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.619654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.620223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.620253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 
00:27:28.582 [2024-07-25 14:54:48.620728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.620757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.621222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.621253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.621744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.621774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.622340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.622371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.622843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.622872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.623288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.623326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.623712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.623725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.623920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.623933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.624423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.624453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.625019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.625056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 
00:27:28.582 [2024-07-25 14:54:48.625342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.625372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.625630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.625659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.626130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.582 [2024-07-25 14:54:48.626160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.582 qpair failed and we were unable to recover it. 00:27:28.582 [2024-07-25 14:54:48.626707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.626720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.627185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.627200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.627418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.627431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.627801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.627815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.628249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.628281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.628752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.628781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.629240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.629255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 
00:27:28.583 [2024-07-25 14:54:48.629742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.629756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.630186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.630200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.630662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.630676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.631137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.631152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.631605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.631618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.632052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.632066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.632287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.632300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.632817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.632830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.633268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.633282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.633720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.633750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 
00:27:28.583 [2024-07-25 14:54:48.634217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.634249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.634721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.634750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.635264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.635279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.635710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.635740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.636061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.636092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.636556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.636585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.637082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.637116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.637551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.637565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.638085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.638116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.638637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.638666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 
00:27:28.583 [2024-07-25 14:54:48.639085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.639116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.639521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.639551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.639958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.639988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.640591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.640622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.641108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.641139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.641612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.641641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.642100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.642130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.642634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.642663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.643144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.643175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-25 14:54:48.643700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.643730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 
00:27:28.583 [2024-07-25 14:54:48.644202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-25 14:54:48.644232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.644753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.644766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.645245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.645275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.645749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.645778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.646251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.646264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.646725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.646739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.647181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.647212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.647416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.647455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.647968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.647998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.648551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.648582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 
00:27:28.584 [2024-07-25 14:54:48.649073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.649103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.649668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.649697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.650164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.650194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.650738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.650767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.651264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.651295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.651754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.651783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.651969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.651998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.652482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.652513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.653093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.653123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.653611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.653640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 
00:27:28.584 [2024-07-25 14:54:48.654203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.654217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.654607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.654637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.655162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.655193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.655684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.655713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.656253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.656283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.656771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.656784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.657292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.657306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.657743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.657772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.658250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.658280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.658735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.658748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 
00:27:28.584 [2024-07-25 14:54:48.659179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.659210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.659735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.659764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.660283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.660313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.660627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.660657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.661127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.661159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.661666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.661695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.662239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.662269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.662742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.662772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-25 14:54:48.663238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-25 14:54:48.663252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.663767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.663797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 
00:27:28.585 [2024-07-25 14:54:48.664258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.664288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.664771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.664801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.665258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.665288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.665821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.665850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.666327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.666358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.666818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.666848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.667387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.667418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.667941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.667970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.668377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.668413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.668876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.668906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 
00:27:28.585 [2024-07-25 14:54:48.669468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.669498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.669982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.670012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.670480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.670510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.671062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.671092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.671611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.671640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.672183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.672214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.672774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.672788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.673237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.673267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.673678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.673707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.674251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.674282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 
00:27:28.585 [2024-07-25 14:54:48.674747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.674777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.675265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.675295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.675712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.675741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.676197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.676227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.676717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.676746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.677271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.677301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.677842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.677872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.678132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.678162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-25 14:54:48.678922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-25 14:54:48.678954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.679655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.679686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 
00:27:28.586 [2024-07-25 14:54:48.680210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.680241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.680765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.680795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.681378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.681408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.681905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.681934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.682396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.682426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.682849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.682883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.683430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.683461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.683880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.683909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.684430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.684468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.684986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.685016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 
00:27:28.586 [2024-07-25 14:54:48.685572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.685602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.686173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.686203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.686680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.686710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.687274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.687304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.687693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.687722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.688274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.688316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.688782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.688796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.689178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.689192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.689628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.689657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.690065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.690096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 
00:27:28.586 [2024-07-25 14:54:48.690565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.690594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.691066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.691096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.691509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.691538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.692084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.692115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.692662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.692692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.693173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.693203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.693618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.693647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.694190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.694220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.694683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.694713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.695254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.695285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 
00:27:28.586 [2024-07-25 14:54:48.695692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.695722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.696207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.696238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.696708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.696737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.697148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.697179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.697723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.697752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.698224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.586 [2024-07-25 14:54:48.698238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.586 qpair failed and we were unable to recover it. 00:27:28.586 [2024-07-25 14:54:48.698724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.698738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.699253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.699284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.699707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.699736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.700269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.700283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 
00:27:28.587 [2024-07-25 14:54:48.700775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.700805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.701259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.701273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.701759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.701773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.702263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.702292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.702777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.702807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.703337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.703367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.703773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.703807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.704278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.704309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.704784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.704813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.705267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.705297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 
00:27:28.587 [2024-07-25 14:54:48.705842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.705855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.706288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.706302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.706733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.706746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.707235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.707266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.707723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.707753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.708295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.708326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.708801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.708830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.709358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.709388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.709864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.709893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.710363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.710394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 
00:27:28.587 [2024-07-25 14:54:48.710882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.710912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.711447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.711477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.711953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.711982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.712525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.712556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.713024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.713063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.713605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.713635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.713839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.713869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.714413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.714443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.714912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.714926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.715396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.715427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 
00:27:28.587 [2024-07-25 14:54:48.715948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.715977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.716515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.716546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.716971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.717001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.587 [2024-07-25 14:54:48.717260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.587 [2024-07-25 14:54:48.717295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.587 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.717703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.717732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.718263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.718294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.718838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.718867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.719389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.719419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.719939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.719969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.720424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.720455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 
00:27:28.588 [2024-07-25 14:54:48.720921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.720950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.721441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.721472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.721992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.722021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.722510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.722540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.723109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.723140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.723628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.723657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.724077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.724108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.724631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.724661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.725009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.725039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.725456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.725485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 
00:27:28.588 [2024-07-25 14:54:48.725958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.725987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.726456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.726492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.727010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.727040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.727590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.727620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.728140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.728171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.728603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.728616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.728994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.729007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.729457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.729471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.729933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.729962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.730414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.730445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 
00:27:28.588 [2024-07-25 14:54:48.730985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.730998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.731507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.731521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.731898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.731911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.732292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.732306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.732669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.732683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.733214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.733228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.733712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.733726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.733943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.733957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.734400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.734414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.734898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.734912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 
00:27:28.588 [2024-07-25 14:54:48.735283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.735297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-25 14:54:48.735724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-25 14:54:48.735738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.736185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.736216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.736779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.736809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.737336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.737372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.737904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.737933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.738341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.738371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.738887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.738901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.739431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.739445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.739930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.739944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 
00:27:28.589 [2024-07-25 14:54:48.740449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.740463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.740822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.740835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.741213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.741227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.741761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.741790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.742256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.742295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.742781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.742795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.743309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.743323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.743810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.743840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.744360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.744374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.744822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.744836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 
00:27:28.589 [2024-07-25 14:54:48.745277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.745291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.745545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.745559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.745984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.746014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.746428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.746442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.746930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.746944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.747375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.747389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.747819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.747833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.748325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.748339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.748765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.748778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.749206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.749220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 
00:27:28.589 [2024-07-25 14:54:48.749712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.749741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.750208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.750244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.750692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.750706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.751212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.751226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.751617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.751631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.752121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.752152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.752668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.752681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.753193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.753207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-25 14:54:48.753609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-25 14:54:48.753638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.754195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.754209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 
00:27:28.590 [2024-07-25 14:54:48.754716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.754730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.755167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.755181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.755634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.755664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.756135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.756165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.756699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.756729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.757187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.757201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.757641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.757670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.758207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.758221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.758719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.758732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.759219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.759232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 
00:27:28.590 [2024-07-25 14:54:48.759718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.759731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.760189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.760203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.760697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.760710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.761170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.761184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.761621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.761635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.762070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.762084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.762547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.762560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.763002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.763015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.763540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.763554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.763843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.763856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 
00:27:28.590 [2024-07-25 14:54:48.764367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.764382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.764811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.764840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.765100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.765130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.765669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.765682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.766116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.766129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.766505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.766518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.766972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.766986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.767470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.767483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.767991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.768004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-25 14:54:48.768433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-25 14:54:48.768446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 
00:27:28.590 [2024-07-25 14:54:48.768911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.768941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.769429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.769466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.769962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.769978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.770425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.770439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.770877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.770906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.771457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.771471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.771924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.771937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.772397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.772412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.772790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.772803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.773288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.773301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 
00:27:28.591 [2024-07-25 14:54:48.773823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.773837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.774267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.774281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.774818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.774847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.775395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.775424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.775967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.775997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.776478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.776508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.776988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.777018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.777517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.777548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.778009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.778038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 00:27:28.591 [2024-07-25 14:54:48.778592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.591 [2024-07-25 14:54:48.778622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.591 qpair failed and we were unable to recover it. 
00:27:28.865 [2024-07-25 14:54:48.877927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.877957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.878480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.878512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.879003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.879032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.879517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.879547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.880089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.880119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.880550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.880580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.881066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.881096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.881634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.881664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.882145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.882175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.882662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.882692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 
00:27:28.865 [2024-07-25 14:54:48.883169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.883200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.883748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.883777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.884299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.884330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.884878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.884908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.885115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.885146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.885599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.885629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.886100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.886132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.886620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.886649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.886905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.886935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.887349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.887380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 
00:27:28.865 [2024-07-25 14:54:48.887921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.887950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.888416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.888447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.888875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.888904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.889372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.889403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.889881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.889911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.890452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.890466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.890939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.890969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.891522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.891552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.891964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.891994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.892475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.892505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 
00:27:28.865 [2024-07-25 14:54:48.893058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.893089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.893611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.893640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.894105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.894137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.894669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.894700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.895182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.895213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.865 qpair failed and we were unable to recover it. 00:27:28.865 [2024-07-25 14:54:48.895710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.865 [2024-07-25 14:54:48.895740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.896310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.896355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.896807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.896843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.897362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.897392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.897659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.897689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 
00:27:28.866 [2024-07-25 14:54:48.898240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.898270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.898678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.898708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.899429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.899462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.900025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.900064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.900582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.900612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.901158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.901188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.901596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.901626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.902160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.902191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.902758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.902788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.903208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.903238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 
00:27:28.866 [2024-07-25 14:54:48.903792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.903822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.904394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.904424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.904895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.904924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.905465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.905496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.905899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.905928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.906448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.906479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.907017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.907055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.907603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.907632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.908160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.908190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.908660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.908690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 
00:27:28.866 [2024-07-25 14:54:48.909216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.909246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.909766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.909795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.910201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.910232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.910795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.910824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.911338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.911355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.911788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.911802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.912292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.912323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.912805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.912835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.913331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.913345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.913775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.913789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 
00:27:28.866 [2024-07-25 14:54:48.914217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.914231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.914655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.914684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.915145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.866 [2024-07-25 14:54:48.915175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.866 qpair failed and we were unable to recover it. 00:27:28.866 [2024-07-25 14:54:48.915737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.915767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.916312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.916341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.916836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.916866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.917321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.917352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.917822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.917852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.918132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.918164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.918707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.918736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 
00:27:28.867 [2024-07-25 14:54:48.919223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.919254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.919773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.919802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.920270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.920300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.920859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.920889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.921462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.921492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.921975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.922005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.922544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.922575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.923077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.923109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.923363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.923393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.923934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.923964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 
00:27:28.867 [2024-07-25 14:54:48.924496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.924527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.924994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.925024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.925505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.925535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.926077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.926107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.926436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.926465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.927033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.927072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.927485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.927515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.928040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.928079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.928619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.928648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.929189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.929220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 
00:27:28.867 [2024-07-25 14:54:48.929738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.929768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.930242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.930272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.930838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.930868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.931397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.931428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.931842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.931871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.932345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.932380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.932925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.932954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.933452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.933483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.934005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.867 [2024-07-25 14:54:48.934034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.867 qpair failed and we were unable to recover it. 00:27:28.867 [2024-07-25 14:54:48.934525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.934555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 
00:27:28.868 [2024-07-25 14:54:48.935033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.935073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.935490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.935520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.935981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.936011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.936482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.936512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.936904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.936934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.937426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.937457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.938009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.938039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.938447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.938476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.938999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.939029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.939531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.939561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 
00:27:28.868 [2024-07-25 14:54:48.939975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.940004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.940482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.940513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.940976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.941005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.941502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.941532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.941732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.941761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.942280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.942310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.942854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.942884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.943403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.943434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.943978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.944009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.944559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.944589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 
00:27:28.868 [2024-07-25 14:54:48.945192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.945222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.945693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.945722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.946265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.946295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.946771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.946801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.947347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.947378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.947835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.947848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.948290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.948321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.948839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.948876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.949334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.949364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.949832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.949870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 
00:27:28.868 [2024-07-25 14:54:48.950311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.950325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.950780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.950810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.951295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.951325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.951809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.951839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.952369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.952383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.868 qpair failed and we were unable to recover it. 00:27:28.868 [2024-07-25 14:54:48.952817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.868 [2024-07-25 14:54:48.952831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.953342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.953356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.953754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.953783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.954187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.954218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.954704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.954733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 
00:27:28.869 [2024-07-25 14:54:48.955188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.955219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.955763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.955793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.956305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.956335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.956857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.956887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.957410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.957439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.957694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.957723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.958243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.958274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.958818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.958847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.959396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.959426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.959898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.959928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 
00:27:28.869 [2024-07-25 14:54:48.960475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.960506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.960974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.961003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.961561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.961592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.962079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.962110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.962568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.962597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.963071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.963101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.963509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.963539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.964004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.964032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.964512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.964543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.964963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.964992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 
00:27:28.869 [2024-07-25 14:54:48.965476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.965490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.965928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.965958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.966446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.966486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.966922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.966939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.967457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.967488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.967907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.967936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.968408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.869 [2024-07-25 14:54:48.968438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.869 qpair failed and we were unable to recover it. 00:27:28.869 [2024-07-25 14:54:48.968978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.969008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.969604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.969634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.970024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.970063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 
00:27:28.870 [2024-07-25 14:54:48.970550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.970580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.971031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.971071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.971489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.971519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.972084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.972115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.972598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.972628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.973173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.973204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.973622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.973651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.974175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.974206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.974752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.974782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.975271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.975301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 
00:27:28.870 [2024-07-25 14:54:48.975767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.975796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.976342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.976372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.976918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.976948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.977431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.977461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.978026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.978074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.978624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.978653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.979120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.979134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.979658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.979671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.980118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.980132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.980620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.980634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 
00:27:28.870 [2024-07-25 14:54:48.980883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.980896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.981322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.981336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.981821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.981835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.982253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.982267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.982792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.982822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.983355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.983369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.983809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.983822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.984274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.984316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.984884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.984914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.985449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.985463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 
00:27:28.870 [2024-07-25 14:54:48.985985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.985998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.986524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.986554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.987026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.987065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.987645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.987659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.870 qpair failed and we were unable to recover it. 00:27:28.870 [2024-07-25 14:54:48.988089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.870 [2024-07-25 14:54:48.988104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.988486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.988500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.988984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.988997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.989504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.989517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.989968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.989981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.990379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.990410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 
00:27:28.871 [2024-07-25 14:54:48.990958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.990987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.991502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.991516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.991933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.991946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.992385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.992416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.992959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.992989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.993468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.993482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.993835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.993849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.994332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.994346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.994781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.994794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.995224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.995238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 
00:27:28.871 [2024-07-25 14:54:48.995669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.995698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.996215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.996245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.996734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.996763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.997260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.997274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.997787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.997816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.998281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.998295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.998739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.998753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.999052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.999066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:48.999582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:48.999595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:49.000033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:49.000080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 
00:27:28.871 [2024-07-25 14:54:49.000583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:49.000600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:49.001030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:49.001052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:49.001417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:49.001431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:49.001812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:49.001825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:49.002260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:49.002274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:49.002669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:49.002683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:49.003193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:49.003207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:49.003910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:49.003924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:49.004383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:49.004397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:49.004814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:49.004828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 
00:27:28.871 [2024-07-25 14:54:49.005265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:49.005279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:49.005734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.871 [2024-07-25 14:54:49.005748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.871 qpair failed and we were unable to recover it. 00:27:28.871 [2024-07-25 14:54:49.006186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.006200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.006638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.006652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.007158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.007172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.007346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.007359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.007729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.007743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.008249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.008262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.008510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.008524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.008971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.008985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 
00:27:28.872 [2024-07-25 14:54:49.009517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.009547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.010022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.010035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.010551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.010565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.011102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.011116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.011492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.011505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.011956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.011986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.012456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.012470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.012835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.012849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.013378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.013392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.013880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.013894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 
00:27:28.872 [2024-07-25 14:54:49.014404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.014418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.014932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.014946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.015472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.015504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.016055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.016086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.016547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.016560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.017086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.017100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.017552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.017566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.018060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.018075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.018473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.018487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.018912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.018925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 
00:27:28.872 [2024-07-25 14:54:49.019413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.019427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.019861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.019891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.020406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.020424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.020916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.020929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.021417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.021430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.021941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.021955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.022375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.022389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.022764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.022778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.023216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.023230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 00:27:28.872 [2024-07-25 14:54:49.023736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.872 [2024-07-25 14:54:49.023750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.872 qpair failed and we were unable to recover it. 
00:27:28.873 [2024-07-25 14:54:49.024181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.024195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.024569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.024582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.025038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.025076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.025569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.025599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.026079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.026093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.026522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.026535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.027058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.027089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.027556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.027585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.028104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.028135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.028606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.028636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 
00:27:28.873 [2024-07-25 14:54:49.029169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.029199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.029610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.029640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.030093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.030124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.030593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.030623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.031164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.031194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.031664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.031694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.031945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.031974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.032469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.032500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.033018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.033054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.033600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.033634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 
00:27:28.873 [2024-07-25 14:54:49.034174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.034205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.034771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.034800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.035262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.035293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.035766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.035796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.036342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.036373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.036895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.036925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.037450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.037480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.037957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.037987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.038541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.038571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.039090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.039120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 
00:27:28.873 [2024-07-25 14:54:49.039585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.039615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.873 [2024-07-25 14:54:49.040150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.873 [2024-07-25 14:54:49.040182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.873 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.040726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.040756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.041299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.041313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.041747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.041776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.042320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.042350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.042897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.042926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.043413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.043443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.043895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.043925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.044423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.044454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 
00:27:28.874 [2024-07-25 14:54:49.044939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.044969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.045534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.045564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.045985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.046023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.046445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.046460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.046965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.046978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.047496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.047527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.048077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.048108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.048584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.048614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.049154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.049185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.049726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.049756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 
00:27:28.874 [2024-07-25 14:54:49.050288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.050302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.050841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.050871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.051336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.051367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.051859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.051890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.052409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.052440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.052985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.053014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.053510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.053541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.054034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.054074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.054550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.054580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.055147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.055178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 
00:27:28.874 [2024-07-25 14:54:49.055744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.055779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.056264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.056294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.056842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.056872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.057368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.057399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.057869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.057898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.058442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.058472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.058972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.059001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.059531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.059561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.874 [2024-07-25 14:54:49.060037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.874 [2024-07-25 14:54:49.060077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.874 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.060560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.060591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 
00:27:28.875 [2024-07-25 14:54:49.061059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.061073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.061603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.061633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.062071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.062102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.062569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.062599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.063137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.063167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.063627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.063656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.064126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.064157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.064702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.064732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.065148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.065179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.065652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.065682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 
00:27:28.875 [2024-07-25 14:54:49.066028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.066065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.066608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.066638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.067186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.067216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.067705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.067736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.068258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.068289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.068762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.068792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.069320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.069350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.069869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.069904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.070371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.070401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.070932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.070962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 
00:27:28.875 [2024-07-25 14:54:49.071435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.071466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.071951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.071980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.072433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.072464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.072985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.073015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.073507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.073539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.074011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.074041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.074544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.074573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.075057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.075088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.075584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.075613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.076099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.076131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 
00:27:28.875 [2024-07-25 14:54:49.076613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.076643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.077209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.077240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.077792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.077822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.078143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.078174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.078721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.078750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.079265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.875 [2024-07-25 14:54:49.079279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.875 qpair failed and we were unable to recover it. 00:27:28.875 [2024-07-25 14:54:49.079766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.079780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.080197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.080212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.080589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.080618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.081160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.081190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 
00:27:28.876 [2024-07-25 14:54:49.081595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.081624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.082096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.082127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.082606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.082635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.083097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.083127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.083584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.083614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.083896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.083926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.084413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.084427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.084892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.084922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.085470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.085500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.085967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.085997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 
00:27:28.876 [2024-07-25 14:54:49.086470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.086501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.086961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.086990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.087483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.087514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.088062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.088093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.088617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.088646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.089186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.089217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.089674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.089704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.090246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.090276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.090767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.090783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.091222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.091253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 
00:27:28.876 [2024-07-25 14:54:49.091726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.091756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.092224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.092255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.092437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.092450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.092880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.092893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.093345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.093359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.093855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.093868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.094359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.094389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.094945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.094974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.095541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.095572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.096083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.096118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 
00:27:28.876 [2024-07-25 14:54:49.096555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.096585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.097130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.097161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.097640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.097670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.098217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.876 [2024-07-25 14:54:49.098247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.876 qpair failed and we were unable to recover it. 00:27:28.876 [2024-07-25 14:54:49.098715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.098745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.098923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.098937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.099363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.099393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.099940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.099970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.100488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.100518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.100988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.101017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 
00:27:28.877 [2024-07-25 14:54:49.101287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.101317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.101862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.101892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.102098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.102128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.102593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.102622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.103085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.103115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.103487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.103521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.104075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.104105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.104663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.104693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.105189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.105202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.105708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.105738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 
00:27:28.877 [2024-07-25 14:54:49.106258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.106289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.106760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.106789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.107259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.107289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.107813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.107843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.108321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.108351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.108837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.108867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.109363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.109393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.109933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.109963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.110374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.110404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.110948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.110962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 
00:27:28.877 [2024-07-25 14:54:49.111465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.111496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.112014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.112056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.112604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.112633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.113036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.113075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.113357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.113387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.113932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.113961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.114428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.114442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.114876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.114890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.115403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.115433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.115907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.115936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 
00:27:28.877 [2024-07-25 14:54:49.116477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.116507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.117055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.877 [2024-07-25 14:54:49.117085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.877 qpair failed and we were unable to recover it. 00:27:28.877 [2024-07-25 14:54:49.117612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.117641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.118118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.118149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.118684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.118698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.119211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.119242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.119742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.119772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.120297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.120327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.120789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.120819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.121284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.121314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 
00:27:28.878 [2024-07-25 14:54:49.121770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.121800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.122277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.122307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.122852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.122882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.123402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.123432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.123900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.123929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.124422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.124455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.124932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.124971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.125463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.125494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.126034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.126074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.126640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.126669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 
00:27:28.878 [2024-07-25 14:54:49.127218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.127250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.127714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.127743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.128204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.128234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.128696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.128726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.129187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.129217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.129686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.129715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.130235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.130266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.130720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.130749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.131286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.131300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.131810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.131823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 
00:27:28.878 [2024-07-25 14:54:49.132330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.132361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.132838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.132866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.133336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.133368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.133841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.878 [2024-07-25 14:54:49.133870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.878 qpair failed and we were unable to recover it. 00:27:28.878 [2024-07-25 14:54:49.134410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.134440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.134959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.134989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.135544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.135575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.136033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.136074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.136592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.136621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.137086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.137117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 
00:27:28.879 [2024-07-25 14:54:49.137666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.137695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.138147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.138177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.138650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.138680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.139174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.139205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.139680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.139709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.140256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.140286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.140778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.140807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.141371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.141402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.141907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.141937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.142430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.142460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 
00:27:28.879 [2024-07-25 14:54:49.143019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.143032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.143493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.143523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.143962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.143991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.144542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.144573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.145065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.145095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.145625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.145639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.146075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.146106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.146511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.146542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.147000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.147013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.147500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.147514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 
00:27:28.879 [2024-07-25 14:54:49.148002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.879 [2024-07-25 14:54:49.148016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:28.879 qpair failed and we were unable to recover it. 00:27:28.879 [2024-07-25 14:54:49.148716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.148791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.149256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.149297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.149784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.149815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.150280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.150311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.150810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.150839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.151299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.151330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.151801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.151831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.152350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.152380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.152882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.152912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 
00:27:29.150 [2024-07-25 14:54:49.153399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.153430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.153991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.154021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.154552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.154583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.155125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.155156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.155617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.155647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.156172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.156203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.156676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.156706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.157178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.157209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.157752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.157780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.158245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.158275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 
00:27:29.150 [2024-07-25 14:54:49.158741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.158771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.159321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.159351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.159603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.159641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.160082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.150 [2024-07-25 14:54:49.160112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.150 qpair failed and we were unable to recover it. 00:27:29.150 [2024-07-25 14:54:49.160581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.160616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.161138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.161169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.161425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.161454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.161921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.161951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.162432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.162462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.162928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.162942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 
00:27:29.151 [2024-07-25 14:54:49.163458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.163489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.163955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.163985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.164438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.164469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.164989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.165002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.165493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.165507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.165957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.165987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.166400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.166430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.166959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.166973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.167433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.167464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.167937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.167966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 
00:27:29.151 [2024-07-25 14:54:49.168483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.168513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.168920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.168949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.169231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.169262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.169808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.169838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.170333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.170363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.170909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.170938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.171437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.171451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.171891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.171904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.172353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.172384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.172860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.172889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 
00:27:29.151 [2024-07-25 14:54:49.173441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.173455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.173987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.174001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.174454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.174468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.174763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.174792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.175312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.175342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.175806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.175836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.176393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.176423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.176882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.176912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.177413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.177444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.177945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.177975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 
00:27:29.151 [2024-07-25 14:54:49.178520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.178550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.179081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.179112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.179560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.179589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.180155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.180185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.180652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.180682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.181203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.181240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.181725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.181754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.182252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.182283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.182698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.182727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.183180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.183210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 
00:27:29.151 [2024-07-25 14:54:49.183678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.183708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.183963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.183993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.184499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.184530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.184932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.184961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.185481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.185512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.185986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.186000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.186395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.186409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.186844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.186857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.187292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.187323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.187736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.187765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 
00:27:29.151 [2024-07-25 14:54:49.188307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.188338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.188862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.188891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.189382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.189414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.189835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.189865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.190275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.190305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.190825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.190855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.191397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.191428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.191850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.191879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.192398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.192428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.192907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.192936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 
00:27:29.151 [2024-07-25 14:54:49.193340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.193371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.193833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.193863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.194120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.194136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.194524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.194537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.194996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.195026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.195581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.151 [2024-07-25 14:54:49.195612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.151 qpair failed and we were unable to recover it. 00:27:29.151 [2024-07-25 14:54:49.196082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.196112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.196576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.196606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.197090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.197121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.197669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.197697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 
00:27:29.152 [2024-07-25 14:54:49.198091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.198121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.198573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.198603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.199155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.199185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.199704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.199733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.200207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.200238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.200712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.200741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.201296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.201310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.201731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.201760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.202300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.202331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.202848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.202878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 
00:27:29.152 [2024-07-25 14:54:49.203374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.203404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.203871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.203900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.204388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.204419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.204963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.204992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.205524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.205555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.206093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.206124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.206646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.206676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.207228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.207259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.207759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.207790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.208188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.208223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 
00:27:29.152 [2024-07-25 14:54:49.208649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.208678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.208955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.208984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.209507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.209537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.210022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.210064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.210611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.210641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.211161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.211193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.211661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.211691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.212245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.212276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.212819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.212849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.213174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.213204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 
00:27:29.152 [2024-07-25 14:54:49.213607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.213637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.214367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.214400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.214890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.214920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.215456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.215491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.215967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.215997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.216485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.216515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.217007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.217036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.217236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.217266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.217690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.217719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.218204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.218236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 
00:27:29.152 [2024-07-25 14:54:49.218692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.218722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.219244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.219274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.219838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.219868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.220333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.220364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.220775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.220804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.221280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.221312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.221777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.221806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.222209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.222240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.222753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.222782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.223261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.223292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 
00:27:29.152 [2024-07-25 14:54:49.223778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.223807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.224290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.224321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.224707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.224721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.225117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.225132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.225571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.225600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.226129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.226159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.226580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.226609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.227076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.227107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.227517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.227547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.227995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.228010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 
00:27:29.152 [2024-07-25 14:54:49.228472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.228492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.229004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.229017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.229548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.229562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.229932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.229961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.230379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.230393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.230758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.230772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.231211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.231226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.231661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.152 [2024-07-25 14:54:49.231698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.152 qpair failed and we were unable to recover it. 00:27:29.152 [2024-07-25 14:54:49.232167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.232198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.232475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.232489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 
00:27:29.153 [2024-07-25 14:54:49.232867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.232881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.233039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.233056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.233568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.233597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.234007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.234036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.234461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.234496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.234958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.234971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.235396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.235410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.235792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.235821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.236282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.236312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.236814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.236844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 
00:27:29.153 [2024-07-25 14:54:49.237287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.237301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.237737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.237751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.238197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.238212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.238597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.238610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.239067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.239081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.239521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.239535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.239914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.239927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.240303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.240317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.240808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.240821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.241246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.241273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 
00:27:29.153 [2024-07-25 14:54:49.241755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.241784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.242262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.242276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.242647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.242661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.243191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.243205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.243574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.243587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.244074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.244088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.244519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.244533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.244969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.244983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.245438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.245452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.245806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.245819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 
00:27:29.153 [2024-07-25 14:54:49.246267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.246280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.246710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.246726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.247096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.247109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.247595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.247609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.247990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.248003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.248393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.248423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.248884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.248914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.249396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.249410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.249848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.249861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.250251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.250264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 
00:27:29.153 [2024-07-25 14:54:49.250702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.250715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.251133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.251147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.251583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.251597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.251969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.251983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.252144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.252157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.252584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.252613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.253029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.253068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.253524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.253554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.253987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.254000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.254442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.254472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 
00:27:29.153 [2024-07-25 14:54:49.254956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.254970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.255432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.255445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.255818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.255832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.256275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.256305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.256823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.256853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.257272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.257302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.257766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.257795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.258253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.258283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.258754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.258789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.259245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.259276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 
00:27:29.153 [2024-07-25 14:54:49.259732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.259761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.260231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.153 [2024-07-25 14:54:49.260245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.153 qpair failed and we were unable to recover it. 00:27:29.153 [2024-07-25 14:54:49.260616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.260630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.261059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.261074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.261499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.261512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.261885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.261914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.262371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.262402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.262841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.262855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.263319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.263333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.263719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.263733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 
00:27:29.154 [2024-07-25 14:54:49.264210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.264224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.264461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.264474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.264911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.264924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.265645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.265660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.266108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.266138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.266640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.266654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.267184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.267198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.267663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.267676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.268207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.268221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.268667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.268680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 
00:27:29.154 [2024-07-25 14:54:49.269196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.269227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.269725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.269739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.270177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.270191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.270631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.270644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.271157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.271188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.271822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.271835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.272373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.272405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.272835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.272874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.273411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.273426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.273673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.273686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 
00:27:29.154 [2024-07-25 14:54:49.274177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.274191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.274633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.274646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.275117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.275131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.275524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.275537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.275984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.276009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.276437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.276451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.276886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.276899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.277417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.277431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.277819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.277832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.278229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.278245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 
00:27:29.154 [2024-07-25 14:54:49.278717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.278730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.279261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.279275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.279719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.279733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.280247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.280261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.280699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.280712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.281208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.281222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.281638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.281651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.282186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.282201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.282639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.282652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.283120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.283134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 
00:27:29.154 [2024-07-25 14:54:49.283562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.283575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.284110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.284124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.284590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.284604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.285176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.285190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.285681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.285694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.286215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.286245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.286782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.286811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.287287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.287302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.287768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.287797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.288338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.288368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 
00:27:29.154 [2024-07-25 14:54:49.288893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.288922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.289390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.289420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.290083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.290100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.290475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.290505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.291062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.291093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.291592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.291622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.292102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.292134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.292622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.292652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.293203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.293233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.293709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.293723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 
00:27:29.154 [2024-07-25 14:54:49.294231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.294245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.294768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.294781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.295305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.154 [2024-07-25 14:54:49.295320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.154 qpair failed and we were unable to recover it. 00:27:29.154 [2024-07-25 14:54:49.295846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.295860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.296353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.296384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.296903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.296932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.297479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.297510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.297976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.298006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.298498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.298530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.299062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.299093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 
00:27:29.155 [2024-07-25 14:54:49.299524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.299556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.300107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.300139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.300616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.300645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.301158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.301189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.301665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.301694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.302167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.302198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.302675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.302704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.303136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.303167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.303643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.303672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.304229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.304260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 
00:27:29.155 [2024-07-25 14:54:49.304770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.304800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.305349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.305381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.305810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.305840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.306368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.306399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.306825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.306855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.307328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.307358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.307896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.307926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.308398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.308430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.308902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.308931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.309426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.309440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 
00:27:29.155 [2024-07-25 14:54:49.309835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.309865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.310272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.310303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.310797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.310827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.311359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.311390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.311868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.311898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.312367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.312398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.312932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.312962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.313433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.313470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.313900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.313930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.314479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.314510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 
00:27:29.155 [2024-07-25 14:54:49.315120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.315151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.315624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.315653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.316135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.316167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.316643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.316673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.317158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.317188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.317686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.317715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.318323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.318354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.318792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.318806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.319327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.319341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.319732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.319761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 
00:27:29.155 [2024-07-25 14:54:49.320292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.320323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.320792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.320823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.321355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.321386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.321820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.321849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.322380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.322411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.323010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.323039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.323550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.323580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.324141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.324172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.324789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.324818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.325352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.325366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 
00:27:29.155 [2024-07-25 14:54:49.325822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.325851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.326317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.326331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.326726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.326756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.327307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.327337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.327883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.327912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.328423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.328454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.328953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.328984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.329450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.329481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.329987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.155 [2024-07-25 14:54:49.330017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.155 qpair failed and we were unable to recover it. 00:27:29.155 [2024-07-25 14:54:49.330604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.330634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 
00:27:29.156 [2024-07-25 14:54:49.331192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.331222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.331740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.331769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.332281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.332313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.332746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.332775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.333314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.333345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.333866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.333895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.334315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.334346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.334823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.334852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.335269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.335283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.335728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.335757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 
00:27:29.156 [2024-07-25 14:54:49.336256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.336287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.336829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.336858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.337411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.337442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.338004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.338034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.338544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.338574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.339140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.339172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.339729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.339758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.340326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.340358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.340858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.340887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.341592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.341626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 
00:27:29.156 [2024-07-25 14:54:49.342161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.342192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.342667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.342697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.343183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.343213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.343691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.343720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.344422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.344454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.344949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.344979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.345508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.345539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.346037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.346077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.346508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.346538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.347105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.347136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 
00:27:29.156 [2024-07-25 14:54:49.347672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.347701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.348126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.348156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.348707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.348736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.349316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.349347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.349840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.349870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.350339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.350355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.350819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.350833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.351339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.351354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.351836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.351865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.352431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.352461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 
00:27:29.156 [2024-07-25 14:54:49.352990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.353020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.353527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.353557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.354077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.354108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.354613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.354643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.355184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.355215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.355640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.355670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.356182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.356196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.356690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.356719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.357193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.357207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.357705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.357735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 
00:27:29.156 [2024-07-25 14:54:49.358277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.358308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.358754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.358783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.359311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.359342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.359762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.359791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.360304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.360335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.360764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.360804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.361318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.361332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.361708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.361738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.362222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.362253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.362779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.362814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 
00:27:29.156 [2024-07-25 14:54:49.363243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.363274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.363826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.363856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.364454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.364485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.365055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.365086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.365594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.365623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.366102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.366132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.366577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.366606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.367087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.367119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.367868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.367899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 00:27:29.156 [2024-07-25 14:54:49.368465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.156 [2024-07-25 14:54:49.368480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.156 qpair failed and we were unable to recover it. 
00:27:29.157 [2024-07-25 14:54:49.369002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.369032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.369526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.369555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.370137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.370151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.370544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.370557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.370943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.370973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.371481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.371512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.371936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.371971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.372466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.372497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.372938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.372967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.373504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.373535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 
00:27:29.157 [2024-07-25 14:54:49.374150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.374179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.374684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.374714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.375197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.375229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.375657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.375670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.376135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.376166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.376713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.376742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.377219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.377250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.377775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.377804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.378371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.378402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.378824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.378853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 
00:27:29.157 [2024-07-25 14:54:49.379425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.379456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.379887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.379916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.380332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.380363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.380794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.380824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.381327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.381358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.381836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.381865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.382300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.382314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.382767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.382797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.383321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.383351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.383830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.383859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 
00:27:29.157 [2024-07-25 14:54:49.384385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.384416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.384841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.384871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.385397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.385428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.386007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.386041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.386527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.386558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.387069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.387100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.387534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.387563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.388091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.388123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.388631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.388661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.389147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.389178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 
00:27:29.157 [2024-07-25 14:54:49.389611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.389641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.390122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.390154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.390642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.390671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.391358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.391388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.391976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.392006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.392579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.392611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.393230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.393260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.393692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.393722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.394199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.394229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.394651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.394680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 
00:27:29.157 [2024-07-25 14:54:49.395246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.395276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.395704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.395733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.396250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.396281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.396709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.396738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.397225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.397257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.397738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.397767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.398324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.398369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.398874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.398904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.399468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.399499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.400029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.400082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 
00:27:29.157 [2024-07-25 14:54:49.400510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.400540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.401112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.401144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.401578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.401607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.402180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.402211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.402641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.402670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.403201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.403232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.403710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.403740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.404285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.404317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.404807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.404837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.405379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.405410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 
00:27:29.157 [2024-07-25 14:54:49.405981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.157 [2024-07-25 14:54:49.406012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.157 qpair failed and we were unable to recover it. 00:27:29.157 [2024-07-25 14:54:49.406599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.406630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.407127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.407158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.407673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.407703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.408266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.408286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.408740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.408766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.409336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.409351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.409755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.409785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.410341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.410373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.410923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.410953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 
00:27:29.158 [2024-07-25 14:54:49.411487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.411520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.412094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.412136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.412614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.412645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.413111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.413143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.413597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.413626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.414123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.414154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.414589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.414620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.415200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.415231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.415722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.415753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.416305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.416322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 
00:27:29.158 [2024-07-25 14:54:49.416736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.416750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.417190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.417204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.417649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.417679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.418176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.418207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.418672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.418702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.419319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.419349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.419902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.419933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.420442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.420473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.421102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.421133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.421641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.421671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 
00:27:29.158 [2024-07-25 14:54:49.422258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.422289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.422770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.422806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.423359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.423374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.423879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.423910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.424420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.424450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.424935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.424965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.425487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.425527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.426003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.426033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.426530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.426561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.427113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.427144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 
00:27:29.158 [2024-07-25 14:54:49.427577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.427608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.428150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.428165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.428657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.428671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.429216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.429247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.429690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.429719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.430229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.430244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.430693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.430707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.431203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.431218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.431669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.431683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.432235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.432267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 
00:27:29.158 [2024-07-25 14:54:49.432753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.432783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.433242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.433274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.433762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.433792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.434285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.434316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.435056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.435071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.435612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.435627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.436088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.436119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.158 [2024-07-25 14:54:49.436607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.158 [2024-07-25 14:54:49.436638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.158 qpair failed and we were unable to recover it. 00:27:29.425 [2024-07-25 14:54:49.437180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.425 [2024-07-25 14:54:49.437214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.425 qpair failed and we were unable to recover it. 00:27:29.425 [2024-07-25 14:54:49.437754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.425 [2024-07-25 14:54:49.437785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.425 qpair failed and we were unable to recover it. 
00:27:29.425 [2024-07-25 14:54:49.438341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.425 [2024-07-25 14:54:49.438372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.425 qpair failed and we were unable to recover it. 00:27:29.425 [2024-07-25 14:54:49.438908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.425 [2024-07-25 14:54:49.438938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.425 qpair failed and we were unable to recover it. 00:27:29.425 [2024-07-25 14:54:49.439576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.425 [2024-07-25 14:54:49.439609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.425 qpair failed and we were unable to recover it. 00:27:29.425 [2024-07-25 14:54:49.440230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.425 [2024-07-25 14:54:49.440263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.425 qpair failed and we were unable to recover it. 00:27:29.425 [2024-07-25 14:54:49.440699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.425 [2024-07-25 14:54:49.440728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.441292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.441324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.441909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.441939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.442438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.442469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.442956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.442986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.443422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.443453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 
00:27:29.426 [2024-07-25 14:54:49.443942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.443971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.444531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.444562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.445116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.445154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.445661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.445690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.446208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.446240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.446725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.446755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.447330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.447362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.447921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.447951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.448492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.448523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.449037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.449088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 
00:27:29.426 [2024-07-25 14:54:49.449657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.449687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.450251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.450283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.450849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.450879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.451411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.451426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.451926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.451940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.452459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.452489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.453057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.453085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.453618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.453633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.454161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.454176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.454635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.454649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 
00:27:29.426 [2024-07-25 14:54:49.455221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.455252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.455780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.455811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.456332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.456347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.456877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.456907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.457391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.457422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.457990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.458019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.458544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.458575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.459166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.459200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.459711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.459741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.460308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.460344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 
00:27:29.426 [2024-07-25 14:54:49.460931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.460961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.461524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.426 [2024-07-25 14:54:49.461556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.426 qpair failed and we were unable to recover it. 00:27:29.426 [2024-07-25 14:54:49.462317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.462351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.462865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.462895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.463433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.463466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.463959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.463989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.464489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.464520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.465058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.465090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.465671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.465702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.466229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.466261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 
00:27:29.427 [2024-07-25 14:54:49.466799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.466814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.467378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.467394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.467805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.467836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.468401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.468433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.468871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.468902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.469476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.469507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.470120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.470152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.470716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.470746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.471280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.471311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.471795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.471824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 
00:27:29.427 [2024-07-25 14:54:49.472388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.472419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.472982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.473011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.473459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.473491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.473981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.474012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.474523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.474554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.475121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.475153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.475635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.475664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.476230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.476261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.476751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.476781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.477326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.477357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 
00:27:29.427 [2024-07-25 14:54:49.477829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.477843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.478304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.478320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.478778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.478793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.479331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.479346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.479803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.479817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.480277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.480292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.480746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.480761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.481216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.481231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.481733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.481747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 00:27:29.427 [2024-07-25 14:54:49.482208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.427 [2024-07-25 14:54:49.482223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.427 qpair failed and we were unable to recover it. 
00:27:29.427 [2024-07-25 14:54:49.482763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.482780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.483310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.483342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.484111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.484147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.484797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.484826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.485386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.485418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.485827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.485841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.486332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.486347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.486799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.486813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.487313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.487328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.487781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.487795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 
00:27:29.428 [2024-07-25 14:54:49.488319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.488335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.488890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.488904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.489388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.489402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.489929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.489944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.490441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.490457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.490987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.491001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.491513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.491528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.491938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.491953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.492461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.492476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.493030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.493052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 
00:27:29.428 [2024-07-25 14:54:49.493500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.493514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.494261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.494277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.494724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.494739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.495276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.495292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.495764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.495778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.496576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.496593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.497137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.497168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.497671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.497702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.498216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.498247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.498743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.498757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 
00:27:29.428 [2024-07-25 14:54:49.499241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.499256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.499661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.499675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.500199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.500215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.500723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.500738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.501200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.501215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.501717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.501731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.502257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.502272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.502798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.428 [2024-07-25 14:54:49.502812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.428 qpair failed and we were unable to recover it. 00:27:29.428 [2024-07-25 14:54:49.503327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.503343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.503751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.503766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 
00:27:29.429 [2024-07-25 14:54:49.504220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.504235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.504634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.504648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.505233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.505249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.505799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.505814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.506345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.506360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.506741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.506754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.507279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.507294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.507775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.507790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.508325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.508340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.508863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.508877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 
00:27:29.429 [2024-07-25 14:54:49.509385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.509400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.509858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.509873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.510362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.510393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.510902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.510932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.511506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.511520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.511976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.511991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.512464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.512479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.512880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.512894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.513383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.513397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.513864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.513878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 
00:27:29.429 [2024-07-25 14:54:49.514412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.514444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.515212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.515247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.515755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.515769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.516227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.516242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.516693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.516708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.517188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.517203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.517732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.517746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.518308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.518324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.519006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.519023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.519513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.519528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 
00:27:29.429 [2024-07-25 14:54:49.519937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.519951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.520430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.520461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.520880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.520910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.521426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.521458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.521955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.521985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.522464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.429 [2024-07-25 14:54:49.522495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.429 qpair failed and we were unable to recover it. 00:27:29.429 [2024-07-25 14:54:49.522991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.523022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.523464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.523501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.523972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.523987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.524467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.524498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 
00:27:29.430 [2024-07-25 14:54:49.525026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.525066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.525553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.525583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.526110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.526141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.526681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.526711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.527310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.527341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.527841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.527871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.528470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.528502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.529104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.529139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.529631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.529661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.530227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.530268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 
00:27:29.430 [2024-07-25 14:54:49.530676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.530705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.531254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.531286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.531821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.531851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.532451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.532482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.533032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.533075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.533519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.533549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.534092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.534124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.534558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.534589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.535151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.535182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.535602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.535631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 
00:27:29.430 [2024-07-25 14:54:49.536068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.536100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.536586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.536616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.537123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.537155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.537588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.537618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.538162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.538193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.538748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.538777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.539377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.430 [2024-07-25 14:54:49.539409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.430 qpair failed and we were unable to recover it. 00:27:29.430 [2024-07-25 14:54:49.539896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.539926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.540415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.540446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.540881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.540916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 
00:27:29.431 [2024-07-25 14:54:49.541483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.541515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.542156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.542186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.542706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.542721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.543253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.543269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.543717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.543746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.544232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.544274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.544806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.544835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.545325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.545358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.545893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.545923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.546490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.546521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 
00:27:29.431 [2024-07-25 14:54:49.547096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.547127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.547686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.547716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.548274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.548304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.548846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.548876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.549455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.549486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.550093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.550124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.550677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.550692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.551198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.551213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.551602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.551631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.552123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.552156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 
00:27:29.431 [2024-07-25 14:54:49.552600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.552629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.553197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.553238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.553696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.553727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.554240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.554271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.554815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.554844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.555404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.555434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.555952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.555988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.556719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.556754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.557312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.557344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.557828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.557858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 
00:27:29.431 [2024-07-25 14:54:49.558347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.558378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.558864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.558893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.559471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.559503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.560030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.560079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.560570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.560600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-07-25 14:54:49.561090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.431 [2024-07-25 14:54:49.561123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.561619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.561650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.562150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.562182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.562683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.562713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.563273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.563303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 
00:27:29.432 [2024-07-25 14:54:49.563786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.563801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.564309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.564340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.564852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.564892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.565396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.565427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.566000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.566030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.566601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.566631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.567191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.567222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.567742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.567772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.568248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.568279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.568766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.568796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 
00:27:29.432 [2024-07-25 14:54:49.569355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.569386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.569902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.569932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.570679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.570713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.571284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.571317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.571765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.571795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.572304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.572335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.572825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.572856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.573404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.573435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.573924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.573954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.574516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.574547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 
00:27:29.432 [2024-07-25 14:54:49.574994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.575024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.575570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.575600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.576123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.576156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.576693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.576724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.577314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.577346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.577860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.577890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-07-25 14:54:49.578479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.432 [2024-07-25 14:54:49.578510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.579006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.579041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.579580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.579610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.580182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.580212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 
00:27:29.433 [2024-07-25 14:54:49.580711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.580741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.581303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.581334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.581774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.581804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.582319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.582334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.582864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.582895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.583474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.583505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.583951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.583980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.584529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.584562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.585231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.585263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.585816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.585846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 
00:27:29.433 [2024-07-25 14:54:49.586413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.586444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.587038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.587081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.587564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.587579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.588081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.588097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.588587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.588618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.589183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.589215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.589738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.589768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.590253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.590285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.590844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.590875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.591479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.591511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 
00:27:29.433 [2024-07-25 14:54:49.592039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.592080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.592568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.592598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.593186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.593219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.593807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.593836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.594422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.594459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.594949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.594963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.595493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.595524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.596121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.596153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.596606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.596635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.597185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.597217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 
00:27:29.433 [2024-07-25 14:54:49.597717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.597747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.598344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.598376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.598881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.598911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.599435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.599467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.433 [2024-07-25 14:54:49.599908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.433 [2024-07-25 14:54:49.599937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.433 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.600457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.600489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.601054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.601069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.601578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.601593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.602161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.602193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.602691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.602720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 
00:27:29.434 [2024-07-25 14:54:49.603255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.603286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.603772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.603803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.604357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.604401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.604936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.604965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.605631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.605663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.606156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.606188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.606619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.606648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.607213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.607245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.607853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.607883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.608307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.608339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 
00:27:29.434 [2024-07-25 14:54:49.608829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.608859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.609358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.609397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.609855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.609870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.610402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.610433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.610988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.611018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.611584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.611615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.612124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.612155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.612640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.612671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.613226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.613258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.613752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.613782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 
00:27:29.434 [2024-07-25 14:54:49.614270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.614302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.614859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.614889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.615473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.615504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.616025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.616071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.616563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.616594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.617113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.617150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.617651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.617682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.618184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.618200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.618668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.618697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.619248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.619280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 
00:27:29.434 [2024-07-25 14:54:49.619844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.619873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.620419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.620451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.620941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.434 [2024-07-25 14:54:49.620971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.434 qpair failed and we were unable to recover it. 00:27:29.434 [2024-07-25 14:54:49.621458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.621489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.622067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.622099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.622535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.622565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.623169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.623201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.623698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.623712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.624225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.624244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.624646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.624677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 
00:27:29.435 [2024-07-25 14:54:49.625237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.625268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.625837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.625867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.626378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.626410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.626925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.626955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.627441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.627473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.628055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.628086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.628587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.628618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.629167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.629200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.629644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.629658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.630152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.630183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 
00:27:29.435 [2024-07-25 14:54:49.630660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.630690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.631282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.631314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.631800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.631818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.632280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.632311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.632822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.632852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.633411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.633442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.633930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.633959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.634434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.634466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.635020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.635065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.635568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.635597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 
00:27:29.435 [2024-07-25 14:54:49.636141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.636173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.636615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.636645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.637231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.637265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.637744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.637759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.638279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.638293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.638688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.638703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.639154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.639169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.639642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.639657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.640205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.640220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.640717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.640732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 
00:27:29.435 [2024-07-25 14:54:49.641273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.435 [2024-07-25 14:54:49.641287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.435 qpair failed and we were unable to recover it. 00:27:29.435 [2024-07-25 14:54:49.641757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.641771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.642316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.642331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.642848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.642862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.643331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.643346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.643895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.643910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.644485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.644500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.645003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.645017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.645487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.645502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.645972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.645986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 
00:27:29.436 [2024-07-25 14:54:49.646513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.646545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.647118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.647150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.647686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.647716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.648327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.648358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.648797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.648810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.649286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.649301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.649822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.649853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.650465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.650496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.651000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.651030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.651608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.651639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 
00:27:29.436 [2024-07-25 14:54:49.652181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.652213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.652711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.652741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.653329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.653361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.653899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.653935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.654545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.654576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.655025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.655040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.655595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.655626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.656217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.656249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.656750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.656791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 00:27:29.436 [2024-07-25 14:54:49.657298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.436 [2024-07-25 14:54:49.657329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.436 qpair failed and we were unable to recover it. 
00:27:29.436 [2024-07-25 14:54:49.657844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.436 [2024-07-25 14:54:49.657875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:29.436 qpair failed and we were unable to recover it.
00:27:29.436 [2024-07-25 14:54:49.658471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.436 [2024-07-25 14:54:49.658502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:29.436 qpair failed and we were unable to recover it.
[... the same three-line error repeats without interruption from 14:54:49.659008 through 14:54:49.769562 (log time 00:27:29.436-00:27:29.709): every connect() to 10.0.0.2:4420 fails with errno = 111, and each qpair on tqpair=0xaf3ed0 fails and cannot be recovered ...]
00:27:29.709 [2024-07-25 14:54:49.770021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.709 [2024-07-25 14:54:49.770036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:29.709 qpair failed and we were unable to recover it.
00:27:29.709 [2024-07-25 14:54:49.770554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.709 [2024-07-25 14:54:49.770568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:29.709 qpair failed and we were unable to recover it.
00:27:29.709 [2024-07-25 14:54:49.771036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.709 [2024-07-25 14:54:49.771056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:29.709 qpair failed and we were unable to recover it. 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Write completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Write completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Write completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Write completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Write completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Write completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Write completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Write completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Write completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Write completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Read completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Write completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.709 Write completed with error (sct=0, sc=8) 00:27:29.709 starting I/O failed 00:27:29.710 Write completed with error (sct=0, sc=8) 00:27:29.710 starting I/O failed 00:27:29.710 [2024-07-25 14:54:49.771367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:29.710 [2024-07-25 14:54:49.771983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.772006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it.
00:27:29.710 [2024-07-25 14:54:49.772477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.772493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.772908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.772923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.773387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.773403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.773921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.773935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.774412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.774427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.774957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.774971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.775511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.775528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.776054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.776069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.776615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.776631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.777135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.777150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it.
00:27:29.710 [2024-07-25 14:54:49.777678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.777693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.778185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.778199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.778649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.778663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.779111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.779125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.779664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.779678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.780202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.780217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.780646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.780660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.781153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.781168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.781668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.781698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.782306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.782337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 
00:27:29.710 [2024-07-25 14:54:49.782903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.782933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.783492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.783523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.784106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.784137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.784694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.784724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.785218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.785232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.785730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.785744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.786247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.786278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.786860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.786889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.787454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.787485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.788017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.788054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 
00:27:29.710 [2024-07-25 14:54:49.788587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.788617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.789196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.789227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.789785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.789820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.790381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.790412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.790970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.710 [2024-07-25 14:54:49.790999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.710 qpair failed and we were unable to recover it. 00:27:29.710 [2024-07-25 14:54:49.791485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.791517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.792106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.792138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.792616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.792646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.793215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.793245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.793795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.793825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 
00:27:29.711 [2024-07-25 14:54:49.794406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.794437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.794963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.794994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.795539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.795570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.796161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.796194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.796741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.796770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.797371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.797401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.797958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.797989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.798508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.798538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.799100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.799131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.799684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.799713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 
00:27:29.711 [2024-07-25 14:54:49.800140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.800155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.800681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.800711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.801306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.801337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.801915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.801944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.802528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.802559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.803148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.803163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.803565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.803594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.804151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.804182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.804678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.804707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.805245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.805276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 
00:27:29.711 [2024-07-25 14:54:49.805833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.805863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.806441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.806472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.807059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.807091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.807647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.807676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.808234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.808249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.711 qpair failed and we were unable to recover it. 00:27:29.711 [2024-07-25 14:54:49.808747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.711 [2024-07-25 14:54:49.808760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.809291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.809323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.809922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.809951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.810495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.810525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.811030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.811068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 
00:27:29.712 [2024-07-25 14:54:49.811623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.811653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.812206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.812237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.812729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.812764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.813324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.813356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.813977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.814006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.814603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.814651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.815230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.815261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.815841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.815871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.816454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.816485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.817029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.817070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 
00:27:29.712 [2024-07-25 14:54:49.817689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.817718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.818197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.818228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.818715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.818745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.819300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.819331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.819854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.819884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.820417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.820448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.821041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.821088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.821584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.821598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.822136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.822168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.822772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.822802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 
00:27:29.712 [2024-07-25 14:54:49.823388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.823420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.824007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.824036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.824642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.824673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.825273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.825305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.825923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.825953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.826522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.826553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.827144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.827175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.827770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.827800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.828399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.828430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.829010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.829040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 
00:27:29.712 [2024-07-25 14:54:49.829617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.829647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.830221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.830252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.712 qpair failed and we were unable to recover it. 00:27:29.712 [2024-07-25 14:54:49.830829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.712 [2024-07-25 14:54:49.830859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.831445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.831475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.832069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.832100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.832693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.832723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.833300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.833331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.833932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.833961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.834545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.834559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.835116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.835147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 
00:27:29.713 [2024-07-25 14:54:49.835641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.835671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.836228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.836259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.836744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.836780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.837339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.837371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.837945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.837975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.838592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.838624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.839101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.839131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.839718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.839748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.840262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.840293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.840829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.840859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 
00:27:29.713 [2024-07-25 14:54:49.841462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.841493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.842082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.842114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.842717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.842747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.843320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.843334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.843839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.843853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.844449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.844480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.845062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.845093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.845639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.845669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.846232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.846264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.846778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.846808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 
00:27:29.713 [2024-07-25 14:54:49.847345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.847376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.847907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.847937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.848470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.848501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.849064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.849095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.849678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.849707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.850294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.850326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.850836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.850866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.851402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.851433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.852012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.852041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.713 qpair failed and we were unable to recover it. 00:27:29.713 [2024-07-25 14:54:49.852643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.713 [2024-07-25 14:54:49.852674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 
00:27:29.714 [2024-07-25 14:54:49.853177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.853217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.853692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.853722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.854303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.854334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.854863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.854898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.855386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.855417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.856004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.856034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.856586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.856601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.857120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.857135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.857622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.857636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.858166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.858197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 
00:27:29.714 [2024-07-25 14:54:49.858786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.858816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.859333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.859365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.859856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.859890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.860446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.860477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.860984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.861014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.861583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.861615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.862117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.862148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.862686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.862716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.863298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.863329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.863926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.863957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 
00:27:29.714 [2024-07-25 14:54:49.864540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.864571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.865068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.865099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.865668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.865699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.866282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.866313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.866839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.866869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.867420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.867451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.867970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.868001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.868559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.868590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.869167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.869182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.869727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.869758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 
00:27:29.714 [2024-07-25 14:54:49.870247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.870279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.870846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.870876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.871436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.871468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.872027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.872066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.872609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.872639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.873194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.873225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.714 [2024-07-25 14:54:49.873821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.714 [2024-07-25 14:54:49.873836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.714 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.874402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.874433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.875027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.875069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.875655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.875669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 
00:27:29.715 [2024-07-25 14:54:49.876138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.876153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.876688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.876703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.877235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.877268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.877840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.877870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.878430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.878462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.879063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.879094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.879545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.879575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.880075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.880106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.880652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.880668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.881191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.881223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 
00:27:29.715 [2024-07-25 14:54:49.881855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.881886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.882372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.882387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.882847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.882866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.883435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.883466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.883895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.883925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.884471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.884502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.885120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.885159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.885589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.885604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.886124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.886158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.886747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.886781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 
00:27:29.715 [2024-07-25 14:54:49.887391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.887422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.888041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.888081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.888593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.888623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.889190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.889222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.889793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.889823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.890391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.890423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.890942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.890973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.891517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.891548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.892089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.892121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.892709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.892739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 
00:27:29.715 [2024-07-25 14:54:49.893338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.715 [2024-07-25 14:54:49.893370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.715 qpair failed and we were unable to recover it. 00:27:29.715 [2024-07-25 14:54:49.893923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.893953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.894489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.894520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.895099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.895129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.895622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.895652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.896252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.896283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.896727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.896756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.897310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.897341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.897850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.897880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.898427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.898458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 
00:27:29.716 [2024-07-25 14:54:49.898875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.898904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.899460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.899492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.900001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.900030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.900576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.900606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.901229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.901242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.901750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.901780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.902382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.902414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.902976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.903005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.903595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.903626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.904150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.904181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 
00:27:29.716 [2024-07-25 14:54:49.904735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.904765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.905374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.905404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.905834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.905869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.906464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.906495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.906977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.907006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.907556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.907586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.908182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.908213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.908703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.908733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.909233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.909263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.909801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.909831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 
00:27:29.716 [2024-07-25 14:54:49.910388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.910403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.910955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.910984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.911512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.911543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.912050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.912081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.912642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.912672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.913221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.913253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.716 qpair failed and we were unable to recover it. 00:27:29.716 [2024-07-25 14:54:49.913793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.716 [2024-07-25 14:54:49.913823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.914357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.914389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.914924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.914954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.915517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.915547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 
00:27:29.717 [2024-07-25 14:54:49.916135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.916167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.916682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.916714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.917278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.917308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.917900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.917929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.918507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.918521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.919063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.919094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.919605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.919636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.920195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.920226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.920657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.920687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.921245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.921277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 
00:27:29.717 [2024-07-25 14:54:49.921768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.921797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.922391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.922423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.923012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.923050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.923582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.923611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.924185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.924217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.924783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.924813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.925405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.925436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.925984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.926014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.926561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.926592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.927158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.927189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 
00:27:29.717 [2024-07-25 14:54:49.927721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.927750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.928311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.928342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.928859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.928893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.929486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.929517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.930128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.930158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.930745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.930775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.931359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.931390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.931980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.932010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.932614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.932645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.933251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.933282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 
00:27:29.717 [2024-07-25 14:54:49.933863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.933896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.934491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.934522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.717 [2024-07-25 14:54:49.935083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.717 [2024-07-25 14:54:49.935115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.717 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.935696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.935725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.936321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.936368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.936932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.936962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.937527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.937559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.938096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.938128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.938683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.938713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.939269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.939300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 
00:27:29.718 [2024-07-25 14:54:49.939864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.939895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.940460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.940491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.941075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.941105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.941615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.941645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.942201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.942232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.942749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.942779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.943338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.943368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.943932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.943962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.944510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.944541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.945108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.945140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 
00:27:29.718 [2024-07-25 14:54:49.945700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.945729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.946227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.946258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.946752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.946782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.947344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.947376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.947843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.947873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.948411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.948441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.949032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.949072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.949654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.949668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.950138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.950169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.950684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.950714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 
00:27:29.718 [2024-07-25 14:54:49.951281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.951312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.951875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.951905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.952369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.952416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.952932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.952963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.953568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.953599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.718 [2024-07-25 14:54:49.954188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.718 [2024-07-25 14:54:49.954219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.718 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.954802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.954831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.955418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.955449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.956034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.956088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.956661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.956691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 
00:27:29.719 [2024-07-25 14:54:49.957271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.957302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.957865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.957894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.958474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.958506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.959109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.959124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.959596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.959626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.960187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.960218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.960803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.960833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.961335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.961366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.961849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.961879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.962438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.962469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 
00:27:29.719 [2024-07-25 14:54:49.963020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.963061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.963620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.963650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.964243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.964274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.964832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.964862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.965437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.965467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.966027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.966066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.966640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.966670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.967237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.967267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.967850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.967880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.968381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.968413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 
00:27:29.719 [2024-07-25 14:54:49.968912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.968941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.969476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.969507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.969983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.970013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.970516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.970547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.719 [2024-07-25 14:54:49.971054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.719 [2024-07-25 14:54:49.971084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.719 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.971641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.971671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.972254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.972285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.972854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.972884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.973471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.973502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.974023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.974061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 
00:27:29.720 [2024-07-25 14:54:49.974642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.974671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.975184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.975214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.975775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.975810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.976398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.976429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.977023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.977070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.977578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.977607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.978188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.978219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.978787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.978816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.979411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.979426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.979953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.979966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 
00:27:29.720 [2024-07-25 14:54:49.980522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.980537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.981087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.981118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.981700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.981714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.982264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.982278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.982838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.982853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.983374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.983405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.983971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.984001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.984567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.984581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.985039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.985059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.985460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.985474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 
00:27:29.720 [2024-07-25 14:54:49.985995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.986025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.986562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.986576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.987086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.987101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.987553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.987567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.988026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.988039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.988507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.988521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.989053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.989084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.989662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.989676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.990171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.990186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 00:27:29.720 [2024-07-25 14:54:49.990722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.720 [2024-07-25 14:54:49.990737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.720 qpair failed and we were unable to recover it. 
00:27:29.988 [2024-07-25 14:54:49.991252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.988 [2024-07-25 14:54:49.991269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.988 qpair failed and we were unable to recover it. 00:27:29.988 [2024-07-25 14:54:49.991844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.988 [2024-07-25 14:54:49.991858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.988 qpair failed and we were unable to recover it. 00:27:29.988 [2024-07-25 14:54:49.992389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.988 [2024-07-25 14:54:49.992404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.988 qpair failed and we were unable to recover it. 00:27:29.988 [2024-07-25 14:54:49.992917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.988 [2024-07-25 14:54:49.992947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.988 qpair failed and we were unable to recover it. 00:27:29.988 [2024-07-25 14:54:49.993480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.988 [2024-07-25 14:54:49.993511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.988 qpair failed and we were unable to recover it. 00:27:29.988 [2024-07-25 14:54:49.994014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.988 [2024-07-25 14:54:49.994051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.988 qpair failed and we were unable to recover it. 00:27:29.988 [2024-07-25 14:54:49.994589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:49.994618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:49.995170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:49.995185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:49.995677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:49.995707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:49.996260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:49.996291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 
00:27:29.989 [2024-07-25 14:54:49.996875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:49.996905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:49.997496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:49.997511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:49.998034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:49.998058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:49.998580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:49.998594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:49.998975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:49.998989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:49.999506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:49.999521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.000110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.000140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.000709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.000739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.001318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.001332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.001740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.001756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 
00:27:29.989 [2024-07-25 14:54:50.002235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.002249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.002727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.002742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.003257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.003272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.003705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.003720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.004307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.004322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.004867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.004882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.005713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.005764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.006269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.006291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.006753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.006769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.007280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.007296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 
00:27:29.989 [2024-07-25 14:54:50.007844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.007859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.008381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.008396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.008916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.008930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.009416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.009431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.009978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.009992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.010447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.010462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.010915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.010930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.011414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.011430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.011846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.011862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.012288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.012308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 
00:27:29.989 [2024-07-25 14:54:50.012754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.012768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.013378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.013393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.013860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-07-25 14:54:50.013874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-07-25 14:54:50.014327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.014343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.014735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.014749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.015312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.015329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.015820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.015843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.016349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.016374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.016823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.016838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.017274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.017297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 
00:27:29.990 [2024-07-25 14:54:50.017744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.017759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.018161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.018177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.018621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.018634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.019111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.019126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.019574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.019588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.020040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.020060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.020457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.020471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.020860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.020874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.021363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.021378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.021806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.021819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 
00:27:29.990 [2024-07-25 14:54:50.022310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.022324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.022695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.022709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.023144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.023159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.023589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.023602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.024127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.024142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.024522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.024535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.025052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.025068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.025495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.025509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.026026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.026041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.026495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.026509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 
00:27:29.990 [2024-07-25 14:54:50.026993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.027006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.027443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.027457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.027927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.027941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.028384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.028399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.028775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.028790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.029175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.029189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.029682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.029696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.030142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.030156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.030667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.030681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-07-25 14:54:50.031251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.031269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 
00:27:29.990 [2024-07-25 14:54:50.031784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-07-25 14:54:50.031798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.032314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.032328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.032782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.032796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.033309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.033323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.033771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.033786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.034303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.034318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.034845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.034859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.035410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.035426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.035960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.035974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.036508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.036523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 
00:27:29.991 [2024-07-25 14:54:50.037064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.037078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.037567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.037581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.038096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.038111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.038577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.038591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.038965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.038994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.039551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.039586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.040014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.040028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.040553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.040586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.041066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.041097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.041577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.041607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 
00:27:29.991 [2024-07-25 14:54:50.042087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.042118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.042588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.042602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.043111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.043143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.043671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.043701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.044252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.044283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.044761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.044791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.045319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.045350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.045875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.045906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.046459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.046490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.046978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.047008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 
00:27:29.991 [2024-07-25 14:54:50.047541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.047574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.048065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.048096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.048672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.048703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.049221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.049253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.049829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.049859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.050353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.050384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.050869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.050900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-07-25 14:54:50.051366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-07-25 14:54:50.051398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.051921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.051951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.052510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.052548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 
00:27:29.992 [2024-07-25 14:54:50.053080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.053112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.053646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.053676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.054146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.054177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.054722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.054751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.055206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.055238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.055707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.055736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.056284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.056315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.056837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.056867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.057334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.057366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.057938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.057968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 
00:27:29.992 [2024-07-25 14:54:50.058552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.058584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.059116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.059148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.059566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.059596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.060150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.060182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.060384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.060413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.060937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.060968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.061425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.061457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.062002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.062031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.062590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.062620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.063190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.063222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 
00:27:29.992 [2024-07-25 14:54:50.063722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.063752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.064220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.064251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.064797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.064826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.065315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.065347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.065915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.065945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.066417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.066448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.066914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.066944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.067196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.067229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.067700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.067730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.068183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.068214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 
00:27:29.992 [2024-07-25 14:54:50.068722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.068752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.069307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.069339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.069833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.069863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.070383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.070415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.070978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-07-25 14:54:50.071008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it. 00:27:29.992 [2024-07-25 14:54:50.071543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.071575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.072055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.072086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.072554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.072584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.073063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.073095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.073615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.073650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 
00:27:29.993 [2024-07-25 14:54:50.074200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.074231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.074772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.074802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.075212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.075243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.075729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.075758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.076261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.076293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.076796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.076835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.077379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.077393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.077821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.077851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.078330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.078361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.078830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.078844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 
00:27:29.993 [2024-07-25 14:54:50.079283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.079297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.079896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.079909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.080395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.080409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.080924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.080938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.081383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.081397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.081843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.081857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.082424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.082439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.082986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.083000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.083608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.083700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.084350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.084378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 
00:27:29.993 [2024-07-25 14:54:50.084911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.084927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.085315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.085329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.085716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.085730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.086241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.086256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.086696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.086709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.087244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-07-25 14:54:50.087258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it. 00:27:29.993 [2024-07-25 14:54:50.087687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.087700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.088165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.088179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.088572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.088586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.089082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.089096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 
00:27:29.994 [2024-07-25 14:54:50.089607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.089620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.090137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.090151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.090711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.090725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.091232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.091246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.091753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.091767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.092199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.092213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.092720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.092734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.093287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.093301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.093821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.093835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.094373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.094390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 
00:27:29.994 [2024-07-25 14:54:50.094884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.094899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.095414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.095428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.095969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.095982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.096442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.096456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.096964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.096977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.097451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.097465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.097921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.097934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.098425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.098440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.098925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.098939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.099453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.099467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 
00:27:29.994 [2024-07-25 14:54:50.100149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.100163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.100618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.100633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.101142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.101156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.101641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.101655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.102141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.102155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.102671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.102685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.103242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.103257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.103757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.103770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.104283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.104297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.104845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.104859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 
00:27:29.994 [2024-07-25 14:54:50.105312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.105343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.105856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.105893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.106413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.106427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.994 [2024-07-25 14:54:50.106960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.994 [2024-07-25 14:54:50.106989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.994 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.107467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.107498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.107961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.107991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.108528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.108559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.109151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.109182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.109692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.109722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.110194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.110208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 
00:27:29.995 [2024-07-25 14:54:50.110721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.110734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.111281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.111314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.111880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.111910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.112440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.112470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.112964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.112994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.113572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.113603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.114180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.114211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.114762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.114792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.115344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.115374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.115952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.115987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 
00:27:29.995 [2024-07-25 14:54:50.116581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.116613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.117164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.117195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.117761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.117791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.118363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.118394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.118877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.118908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.119451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.119480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.120092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.120123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.120649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.120679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.121249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.121279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.121823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.121853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 
00:27:29.995 [2024-07-25 14:54:50.122365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.122396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.122926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.122956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.123502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.123533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.124113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.124145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.124683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.124712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.125282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.125313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.125874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.125903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.126398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.126429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.126909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.126923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.127436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.127467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 
00:27:29.995 [2024-07-25 14:54:50.128124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-07-25 14:54:50.128155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-07-25 14:54:50.128721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.128751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.129242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.129273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.129749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.129779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.130353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.130384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.130930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.130959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.131524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.131555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.132131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.132163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.132646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.132660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.133102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.133133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 
00:27:29.996 [2024-07-25 14:54:50.133622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.133651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.134200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.134231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.134777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.134807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.135308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.135339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.135885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.135914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.136493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.136525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.137008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.137037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.137495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.137526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.138085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.138116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.138692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.138727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 
00:27:29.996 [2024-07-25 14:54:50.139300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.139332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.139919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.139949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.140558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.140590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.141142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.141156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.141674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.141688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.142230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.142261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.142834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.142848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.143304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.143318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.143758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.143788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.144311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.144359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 
00:27:29.996 [2024-07-25 14:54:50.144911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.144941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.145454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.145485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.145951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.145981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.146495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.146509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.147061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.147092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.147610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.147640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.148185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.148216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.148796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.148826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-07-25 14:54:50.149406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-07-25 14:54:50.149438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.149935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.149965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 
00:27:29.997 [2024-07-25 14:54:50.150515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.150546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.151062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.151095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.151658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.151688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.152261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.152293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.152823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.152854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.153458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.153489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.153987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.154017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.154220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02010 is same with the state(5) to be set 00:27:29.997 [2024-07-25 14:54:50.154876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.154948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.155558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.155595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 
00:27:29.997 [2024-07-25 14:54:50.156236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.156251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.156769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.156783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.157305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.157335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.157917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.157947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.158527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.158558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.159135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.159165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.159742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.159771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.160348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.160362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.160866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.160896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.161448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.161479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 
00:27:29.997 [2024-07-25 14:54:50.162030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.162087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.162667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.162697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.163287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.163317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.163858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.163887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.164307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.164337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.164862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.164892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.165419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.165449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.165984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.166014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.166578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.166608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.167110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.167141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 
00:27:29.997 [2024-07-25 14:54:50.167597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.167627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.168114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.168145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.168736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.168765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.169335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.169371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.169923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.169952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.170497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.170527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.997 [2024-07-25 14:54:50.171060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.997 [2024-07-25 14:54:50.171090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.997 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.171690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.171719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.172236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.172266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.172827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.172856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 
00:27:29.998 [2024-07-25 14:54:50.173435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.173466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.174055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.174086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.174673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.174703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.175299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.175329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.175905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.175934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.176469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.176499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.177079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.177111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.177689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.177719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.178315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.178346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.178924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.178954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 
00:27:29.998 [2024-07-25 14:54:50.179466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.179497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.180058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.180089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.180656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.180686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.181282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.181314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.181910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.181939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.182530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.182561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.183074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.183117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.183645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.183674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.184256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.184304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.184877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.184906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 
00:27:29.998 [2024-07-25 14:54:50.185460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.185491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.186025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.186065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.186616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.186645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.187221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.187252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.187822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.187851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.188454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.188484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.188972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.189002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.189572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.998 [2024-07-25 14:54:50.189602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.998 qpair failed and we were unable to recover it. 00:27:29.998 [2024-07-25 14:54:50.190181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.190212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.190727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.190757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 
00:27:29.999 [2024-07-25 14:54:50.191286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.191317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.191925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.191954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.192445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.192476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.193006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.193040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.193602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.193632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.194195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.194226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.194799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.194828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.195331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.195362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.195929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.195957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.196447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.196478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 
00:27:29.999 [2024-07-25 14:54:50.197028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.197067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.197638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.197667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.198177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.198192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.198677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.198707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.199195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.199226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.199730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.199759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.200300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.200330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.200918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.200948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.201442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.201474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.202029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.202068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 
00:27:29.999 [2024-07-25 14:54:50.202678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.202707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.203302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.203316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.203881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.203911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.204504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.204535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.205131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.205169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.205760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.205790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.206274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.206305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.206863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.206892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.207443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.207474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.208000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.208014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 
00:27:29.999 [2024-07-25 14:54:50.208552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.208583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.209141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.209172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.209754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.209790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.210322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.210337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.210830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.210860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-07-25 14:54:50.211450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-07-25 14:54:50.211464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.211971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.211985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.212475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.212506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.213067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.213098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.213680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.213710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 
00:27:30.000 [2024-07-25 14:54:50.214208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.214239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.214709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.214738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.215293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.215324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.215910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.215945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.216509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.216523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.217050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.217081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.217644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.217673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.218228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.218258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.218704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.218734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.219281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.219313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 
00:27:30.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2485872 Killed "${NVMF_APP[@]}" "$@" 00:27:30.000 [2024-07-25 14:54:50.219894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.219924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.220517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.220532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 14:54:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:30.000 14:54:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:30.000 [2024-07-25 14:54:50.220981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.220996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 14:54:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:30.000 14:54:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:30.000 [2024-07-25 14:54:50.221533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.221548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 14:54:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.000 [2024-07-25 14:54:50.222060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.222076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.222599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.222613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.223146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.223161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 
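The "Killed" line explains the flood above: line 36 of target_disconnect.sh killed the running nvmf target application (PID 2485872), and disconnect_init then restarts it via nvmfappstart -m 0xF0, so every connect attempt made during that window is refused. The sketch below is illustrative only (not SPDK or test-suite code; the 30-attempt budget is an arbitrary choice) and shows the shape of a bounded retry loop that keeps probing 10.0.0.2:4420 until the restarted target is listening again:

/* Illustrative bounded retry: while the target application is being killed
 * and restarted, connect() is refused; once a listener is back on 4420 the
 * attempt succeeds and the loop stops. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int try_connect(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return -1;
    }

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
    int saved = errno;                 /* keep the connect() errno across close() */
    close(fd);
    errno = saved;
    return rc;
}

int main(void)
{
    for (int i = 0; i < 30; i++) {     /* give up after ~30 seconds */
        if (try_connect("10.0.0.2", 4420) == 0) {
            printf("listener is back after %d attempt(s)\n", i + 1);
            return 0;
        }
        printf("attempt %d: errno = %d (%s)\n", i + 1, errno, strerror(errno));
        sleep(1);
    }
    fprintf(stderr, "listener did not come back\n");
    return 1;
}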
00:27:30.000 [2024-07-25 14:54:50.223666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.223679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.224213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.224227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.224747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.224761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.225311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.225326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.225775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.225789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.226291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.226306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.226852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.226867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.227335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.227350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.227886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.227901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-07-25 14:54:50.228473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-07-25 14:54:50.228488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 
00:27:30.000 14:54:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2486619
00:27:30.000 14:54:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2486619
00:27:30.000 14:54:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:30.000 [2024-07-25 14:54:50.229027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.000 [2024-07-25 14:54:50.229047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420
00:27:30.000 qpair failed and we were unable to recover it.
00:27:30.000 14:54:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2486619 ']'
00:27:30.000 [2024-07-25 14:54:50.229496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.001 [2024-07-25 14:54:50.229512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420
00:27:30.001 qpair failed and we were unable to recover it.
00:27:30.001 14:54:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:30.001 14:54:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:30.001 [2024-07-25 14:54:50.230031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.001 [2024-07-25 14:54:50.230051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420
00:27:30.001 qpair failed and we were unable to recover it.
00:27:30.001 14:54:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:30.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:30.001 14:54:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:30.001 [2024-07-25 14:54:50.230561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.001 [2024-07-25 14:54:50.230576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420
00:27:30.001 qpair failed and we were unable to recover it.
00:27:30.001 14:54:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:30.001 [2024-07-25 14:54:50.231153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.001 [2024-07-25 14:54:50.231168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420
00:27:30.001 qpair failed and we were unable to recover it.
00:27:30.001 [2024-07-25 14:54:50.231662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.231676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.232110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.232124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.232580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.232594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.233071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.233087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.233643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.233661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.234131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.234146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.234615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.234630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.235088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.235103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.235580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.235594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.236148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.236163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 
00:27:30.001 [2024-07-25 14:54:50.236689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.236704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.237204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.237220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.237675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.237689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.238155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.238170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.238649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.238664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.239164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.239179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.239646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.239660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.240205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.240220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.240774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.240789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.241239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.241254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 
00:27:30.001 [2024-07-25 14:54:50.241817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.241831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.242379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.242395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.242836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.242852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.243398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.243413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.243890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.243904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.244385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.244400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.244947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.244962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.245448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.245463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.245968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.245982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-07-25 14:54:50.246495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.246510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 
00:27:30.001 [2024-07-25 14:54:50.246962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-07-25 14:54:50.246976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.247530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.247547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.248085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.248113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.248618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.248632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.249024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.249039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.249574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.249591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.250141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.250156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.250677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.250691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.251146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.251161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.251685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.251699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 
00:27:30.002 [2024-07-25 14:54:50.252164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.252178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.252726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.252740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.253261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.253275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.253775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.253789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.254314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.254332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.254854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.254868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.255381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.255395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.255914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.255928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.256465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.256480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.256925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.256939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 
00:27:30.002 [2024-07-25 14:54:50.257450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.257464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.257925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.257940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.258470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.258484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.259014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.259028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.259547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.259561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.260089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.260104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.260546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.260560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.261021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.261035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.261446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.261459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.261909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.261923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 
00:27:30.002 [2024-07-25 14:54:50.262415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.262429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.262952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.262965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.263489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.263504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.002 [2024-07-25 14:54:50.263973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.002 [2024-07-25 14:54:50.263987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.002 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.264454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.264469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.264945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.264958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.265478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.265492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.265933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.265947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.266440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.266454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.266952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.266966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 
00:27:30.003 [2024-07-25 14:54:50.267367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.267381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.267827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.267842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.268361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.268376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.268745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.268759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.269277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.269292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.269736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.269750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.270248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.270263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.270694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.270708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.271151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.271166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.271610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.271624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 
00:27:30.003 [2024-07-25 14:54:50.272168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.272183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.272626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.272640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.273139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.273155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.003 [2024-07-25 14:54:50.273596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.003 [2024-07-25 14:54:50.273610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.003 qpair failed and we were unable to recover it. 00:27:30.270 [2024-07-25 14:54:50.274126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.270 [2024-07-25 14:54:50.274144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.270 qpair failed and we were unable to recover it. 00:27:30.270 [2024-07-25 14:54:50.274688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.270 [2024-07-25 14:54:50.274702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.270 qpair failed and we were unable to recover it. 00:27:30.270 [2024-07-25 14:54:50.275261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.270 [2024-07-25 14:54:50.275276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.270 qpair failed and we were unable to recover it. 00:27:30.270 [2024-07-25 14:54:50.275726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.270 [2024-07-25 14:54:50.275739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.270 qpair failed and we were unable to recover it. 00:27:30.270 [2024-07-25 14:54:50.276256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.270 [2024-07-25 14:54:50.276270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.270 qpair failed and we were unable to recover it. 00:27:30.270 [2024-07-25 14:54:50.276790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.270 [2024-07-25 14:54:50.276805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.270 qpair failed and we were unable to recover it. 
00:27:30.270 [2024-07-25 14:54:50.277265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.270 [2024-07-25 14:54:50.277279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420
00:27:30.270 qpair failed and we were unable to recover it.
00:27:30.270 [2024-07-25 14:54:50.277741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.270 [2024-07-25 14:54:50.277755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420
00:27:30.270 qpair failed and we were unable to recover it.
00:27:30.270 [2024-07-25 14:54:50.278120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.270 [2024-07-25 14:54:50.278135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420
00:27:30.270 qpair failed and we were unable to recover it.
00:27:30.270 [2024-07-25 14:54:50.278629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.270 [2024-07-25 14:54:50.278643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420
00:27:30.270 qpair failed and we were unable to recover it.
00:27:30.270 [2024-07-25 14:54:50.279083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.270 [2024-07-25 14:54:50.279098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420
00:27:30.270 qpair failed and we were unable to recover it.
00:27:30.270 [2024-07-25 14:54:50.279539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.270 [2024-07-25 14:54:50.279553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420
00:27:30.270 qpair failed and we were unable to recover it.
00:27:30.270 [2024-07-25 14:54:50.280024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.271 [2024-07-25 14:54:50.280039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420
00:27:30.271 qpair failed and we were unable to recover it.
00:27:30.271 [2024-07-25 14:54:50.280099] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization...
00:27:30.271 [2024-07-25 14:54:50.280155] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:30.271 [2024-07-25 14:54:50.280582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.271 [2024-07-25 14:54:50.280600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420
00:27:30.271 qpair failed and we were unable to recover it.
00:27:30.271 [2024-07-25 14:54:50.281096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.271 [2024-07-25 14:54:50.281110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420
00:27:30.271 qpair failed and we were unable to recover it.
00:27:30.271 [2024-07-25 14:54:50.281646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.281661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.282103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.282119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.282633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.282649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.283144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.283159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.283683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.283698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.283955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.283970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.284343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.284357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.284791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.284805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.285299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.285313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.285700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.285714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 
00:27:30.271 [2024-07-25 14:54:50.286230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.286244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.286715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.286730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.287272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.287287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.287810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.287826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.288206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.288220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.288663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.288677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.289167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.289182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.289620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.289634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.290103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.290118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.290570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.290584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 
00:27:30.271 [2024-07-25 14:54:50.291252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.291266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.291784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.291798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.291977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.291991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.292444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.292458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.292915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.292932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.293324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.293340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.293835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.293850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.294344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.294359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.294745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.294759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.295249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.295263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 
00:27:30.271 [2024-07-25 14:54:50.295754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.295768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.296000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.296014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.296457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.271 [2024-07-25 14:54:50.296472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.271 qpair failed and we were unable to recover it. 00:27:30.271 [2024-07-25 14:54:50.296908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.296922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.297358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.297373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.297809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.297823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.298339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.298353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.298826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.298840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.299214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.299229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.300021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.300037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 
00:27:30.272 [2024-07-25 14:54:50.300484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.300499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.301011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.301025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.301525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.301540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.301991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.302005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.302186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.302201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.302926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.302942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.303379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.303393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.303715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.303730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.304220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.304235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.304609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.304626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 
00:27:30.272 [2024-07-25 14:54:50.305022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.305037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.305509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.305524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.306018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.306033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.306501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.306516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.306891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.306904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.307328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.307343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.307798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.307811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.308236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.308251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.308710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.308726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.309161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.309175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 
00:27:30.272 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.272 [2024-07-25 14:54:50.309536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.309552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.310049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.310064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.310500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.310514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.310891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.310905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.311075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.311089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.311531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.311556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.312001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.312014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.312509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.312523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.313011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.313026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.272 qpair failed and we were unable to recover it. 00:27:30.272 [2024-07-25 14:54:50.313540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-07-25 14:54:50.313556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 
00:27:30.273 [2024-07-25 14:54:50.314053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.314068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.314558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.314572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.315103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.315118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.315520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.315534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.316002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.316017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.316510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.316525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.316696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.316709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.317156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.317173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.317672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.317686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.318115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.318130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 
00:27:30.273 [2024-07-25 14:54:50.318585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.318599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.318976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.318990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.319492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.319506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.319943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.319957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.320378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.320392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.320828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.320842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.321350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.321365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.321823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.321837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.322224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.322238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.322751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.322765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 
00:27:30.273 [2024-07-25 14:54:50.323251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.323266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.323761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.323775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.324228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.324242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.324731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.324745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.325119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.325134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.325559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.325573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.326080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.326094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.326527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.326540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.326980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.326994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.327481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.327495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 
00:27:30.273 [2024-07-25 14:54:50.327984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.327997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.328483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.328497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.328941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.328955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.329443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.329457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82a8000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.329932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.329966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.330394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-07-25 14:54:50.330407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.273 qpair failed and we were unable to recover it. 00:27:30.273 [2024-07-25 14:54:50.330889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.330899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.331159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.331169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.331622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.331632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.332126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.332136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 
00:27:30.274 [2024-07-25 14:54:50.332503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.332512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.332932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.332942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.333350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.333361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.333807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.333817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.334180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.334191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.334617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.334627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.335054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.335065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.335444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.335457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.335937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.335948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.336462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.336472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 
00:27:30.274 [2024-07-25 14:54:50.337034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.337047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.337563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.337573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.338130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.338141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.338565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.338575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.339059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.339070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.339586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.339596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.340100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.340110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.340580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.340590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.341045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.341056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.341486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.341496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 
00:27:30.274 [2024-07-25 14:54:50.341952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.341962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.342481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.342492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.343025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.343035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.343518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.343529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.343908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.343918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.344345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.344356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.345076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.345088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.345540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.345550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.345982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.345992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.346242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.346252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 
00:27:30.274 [2024-07-25 14:54:50.346728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.346738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.347111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.347122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.347546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.274 [2024-07-25 14:54:50.347556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.274 qpair failed and we were unable to recover it. 00:27:30.274 [2024-07-25 14:54:50.347977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.347987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.348419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.348429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.348845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.348855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.349294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.349304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.349728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.349738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.350104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.350114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.350618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.350627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-25 14:54:50.350889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:30.275 [2024-07-25 14:54:50.351137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.351149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.351633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.351643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.351940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.351950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.352378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.352389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.352753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.352763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.353080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.353091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.353408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.353418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.353899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.353909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.354401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.354411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-25 14:54:50.354913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.354924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.355368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.355379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.355810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.355820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.356302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.356313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.356792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.356802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.357267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.357278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.357781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.357792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.358228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.358239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.358655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.358666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.359168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.359180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-25 14:54:50.359551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.359562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.360065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.360078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.360511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.360522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.361000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.361011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.275 qpair failed and we were unable to recover it. 00:27:30.275 [2024-07-25 14:54:50.361234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.275 [2024-07-25 14:54:50.361245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.361666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.361677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.362058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.362070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.362595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.362607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.363108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.363121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.363610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.363621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 
00:27:30.276 [2024-07-25 14:54:50.364126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.364136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.364581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.364591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.365089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.365100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.365596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.365606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.366086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.366096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.366544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.366554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.366979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.366989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.367432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.367442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.367869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.367879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.368324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.368335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 
00:27:30.276 [2024-07-25 14:54:50.368814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.368824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.369191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.369201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.369626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.369636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.370111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.370122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.370533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.370542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.371019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.371030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.371499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.371510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.371875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.371885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.372295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.372307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.372823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.372833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 
00:27:30.276 [2024-07-25 14:54:50.373335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.373346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.373829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.373840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.374342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.374353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.374783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.374793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.375237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.375248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.375604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.375614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.376041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.376056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.376532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.376542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.377041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.377057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.377498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.377509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 
00:27:30.276 [2024-07-25 14:54:50.377936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.377946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.276 qpair failed and we were unable to recover it. 00:27:30.276 [2024-07-25 14:54:50.378100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.276 [2024-07-25 14:54:50.378111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.378576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.378586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.379068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.379078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.379520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.379530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.379967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.379977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.380427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.380437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.380871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.380881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.381311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.381322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.381743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.381753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 
00:27:30.277 [2024-07-25 14:54:50.382251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.382261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.382688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.382698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.383200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.383211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.383581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.383591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.384081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.384091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.384583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.384594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.385018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.385028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.385479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.385490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.385967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.385977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.386395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.386406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 
00:27:30.277 [2024-07-25 14:54:50.386908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.386919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.387349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.387360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.387718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.387728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.388157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.388168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.388616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.388626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.389109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.389120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.389485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.389495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.389951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.389961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.390437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.390450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.390951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.390961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 
00:27:30.277 [2024-07-25 14:54:50.391444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.391454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.391935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.391945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.392422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.392433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.392933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.392944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.393415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.393425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.393953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.393963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.394395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.394406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.394824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.394834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.277 qpair failed and we were unable to recover it. 00:27:30.277 [2024-07-25 14:54:50.395255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.277 [2024-07-25 14:54:50.395266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.395742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.395752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 
00:27:30.278 [2024-07-25 14:54:50.396130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.396141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.396631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.396641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.397063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.397073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.397497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.397507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.397962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.397972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.398448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.398458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.398959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.398969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.399338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.399349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.399795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.399807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.400265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.400279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 
00:27:30.278 [2024-07-25 14:54:50.400760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.400775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.401268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.401282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.401786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.401798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.402307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.402319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.402827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.402841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.403276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.403288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.403769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.403780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.404259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.404270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.404748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.404759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.405237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.405247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 
00:27:30.278 [2024-07-25 14:54:50.405675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.405685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.406166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.406177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.406600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.406610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.407034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.407047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.407526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.407537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.408039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.408055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.408531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.408542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.409049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.409061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.409562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.409577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.410063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.410074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 
00:27:30.278 [2024-07-25 14:54:50.410501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.410511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.410990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.411000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.411429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.411440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.411920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.411930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.412304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.412314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.278 [2024-07-25 14:54:50.412812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.278 [2024-07-25 14:54:50.412821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.278 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.413072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.413082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.413559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.413569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.414068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.414078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.414460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.414470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 
00:27:30.279 [2024-07-25 14:54:50.414972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.414982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.415410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.415421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.415793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.415804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.415949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.415959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.416312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.416322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.416801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.416811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.417318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.417328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.417767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.417777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.418275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.418285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.418748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.418759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 
00:27:30.279 [2024-07-25 14:54:50.419235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.419245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.419723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.419733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.420153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.420163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.420662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.420672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.421101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.421111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.421571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.421581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.422012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.422022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.422449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.422460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.422905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.422915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.423414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.423425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 
00:27:30.279 [2024-07-25 14:54:50.423840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.423849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.424226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.424237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.424662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.424672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.425086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.425096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.425570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.425580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.426081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.426091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.426515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.426524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.426893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.426903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.427392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.427405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.427812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.427822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 
00:27:30.279 [2024-07-25 14:54:50.428248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.428259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.279 [2024-07-25 14:54:50.428686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.279 [2024-07-25 14:54:50.428696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.279 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.429195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.429205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.429630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.429640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.430105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.430115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.430606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.430616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.430999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.431008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.431427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.431437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.431817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.431826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.432271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.432282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 
00:27:30.280 [2024-07-25 14:54:50.432780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.432790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.433266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.433276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.433694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.433704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.434079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.434090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.434439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.434448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.434929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.434939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.435419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.435430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.435855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.435865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.436360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.436370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.436767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.436777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 
00:27:30.280 [2024-07-25 14:54:50.437184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.437194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.437691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.437701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.438116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.438127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.438637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.438648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.439088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.439099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.439601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.439611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.440113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.440123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.440332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.440341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.440769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.440779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.441133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.441143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 
00:27:30.280 [2024-07-25 14:54:50.441566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.441575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.442056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.442066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.442275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.442285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.442760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.442770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.280 qpair failed and we were unable to recover it. 00:27:30.280 [2024-07-25 14:54:50.443090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.280 [2024-07-25 14:54:50.443101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.443604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.443614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.444034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.444049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.444421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.444431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.444932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.444944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.445381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.445391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 
00:27:30.281 [2024-07-25 14:54:50.445748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.445758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.446212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.446222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.446737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.446747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.447268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.447279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.447709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.447720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.448170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.448182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.448658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.448668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.449096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.449107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.449545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.449555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.449972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.449982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 
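Errno 111 on Linux is ECONNREFUSED: at the moment of each record above, nothing was accepting TCP connections on 10.0.0.2:4420 (the NVMe/TCP listener address the initiator keeps dialing), so every connect() attempted by nvme_tcp_qpair_connect_sock was refused and the qpair could not be recovered. A minimal readiness-check sketch follows; the address and port are taken from the log, while the use of ss and the 30-second budget are illustrative assumptions, not part of the test.

#!/usr/bin/env bash
# Illustrative sketch only: wait for a listener on the address/port that the
# log above shows being refused. ADDR/PORT come from the log; the tool choice
# (ss) and the timeout are assumptions.
ADDR=10.0.0.2
PORT=4420
for _ in $(seq 1 30); do
  # 'ss -ltn' lists listening TCP sockets; if the target binds a wildcard
  # address, match on ":${PORT}" instead of the full address:port pair.
  if ss -ltn | grep -q "${ADDR}:${PORT}"; then
    echo "listener is up on ${ADDR}:${PORT}"
    exit 0
  fi
  sleep 1
done
echo "no listener on ${ADDR}:${PORT}; connect() would keep failing with errno 111 (ECONNREFUSED)" >&2
exit 1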
00:27:30.281 [2024-07-25 14:54:50.450485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.281 [2024-07-25 14:54:50.450496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420
00:27:30.281 qpair failed and we were unable to recover it.
00:27:30.281 [2024-07-25 14:54:50.450950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.281 [2024-07-25 14:54:50.450960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420
00:27:30.281 qpair failed and we were unable to recover it.
00:27:30.281 [2024-07-25 14:54:50.451321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.281 [2024-07-25 14:54:50.451332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420
00:27:30.281 qpair failed and we were unable to recover it.
00:27:30.281 [2024-07-25 14:54:50.451833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.281 [2024-07-25 14:54:50.451844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420
00:27:30.281 qpair failed and we were unable to recover it.
00:27:30.281 [2024-07-25 14:54:50.452141] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:30.281 [2024-07-25 14:54:50.452183] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:30.281 [2024-07-25 14:54:50.452195] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:30.281 [2024-07-25 14:54:50.452204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:30.281 [2024-07-25 14:54:50.452212] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:30.281 [2024-07-25 14:54:50.452289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.281 [2024-07-25 14:54:50.452299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420
00:27:30.281 qpair failed and we were unable to recover it.
00:27:30.281 [2024-07-25 14:54:50.452335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:30.281 [2024-07-25 14:54:50.452447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:30.281 [2024-07-25 14:54:50.452556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:30.281 [2024-07-25 14:54:50.452558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:30.281 [2024-07-25 14:54:50.452785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.281 [2024-07-25 14:54:50.452794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420
00:27:30.281 qpair failed and we were unable to recover it.
00:27:30.281 [2024-07-25 14:54:50.453293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.281 [2024-07-25 14:54:50.453304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420
00:27:30.281 qpair failed and we were unable to recover it.
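The app_setup_trace notices above describe two ways to capture the tracepoint data for this run: invoke 'spdk_trace -s nvmf -i 0' while the application is still running, or copy /dev/shm/nvmf_trace.0 afterwards for offline analysis. A hedged sketch of preserving that data follows; only the source path and the spdk_trace invocation come from the log, and ARTIFACT_DIR is a hypothetical destination.

#!/usr/bin/env bash
# Sketch only: archive the trace buffer named in the NOTICE lines above.
# /dev/shm/nvmf_trace.0 and 'spdk_trace -s nvmf -i 0' are quoted from the log;
# ARTIFACT_DIR is a hypothetical placeholder, not a real path from this job.
ARTIFACT_DIR=/var/tmp/nvmf-trace-artifacts
mkdir -p "$ARTIFACT_DIR"
if [ -e /dev/shm/nvmf_trace.0 ]; then
  # Offline route: keep the shared-memory trace file for later decoding.
  cp /dev/shm/nvmf_trace.0 "$ARTIFACT_DIR/"
else
  # Runtime route: only valid while the nvmf application is still up.
  spdk_trace -s nvmf -i 0 > "$ARTIFACT_DIR/nvmf_trace_snapshot.txt" || true
fi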
00:27:30.281 [2024-07-25 14:54:50.453779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.453789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.454296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.454307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.454782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.454792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.455299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.455310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.455488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.455498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.456001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.456012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.456525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.456536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.457019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.457029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.457454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.457465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.457633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.457643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 
00:27:30.281 [2024-07-25 14:54:50.458064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.458075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.281 qpair failed and we were unable to recover it. 00:27:30.281 [2024-07-25 14:54:50.458581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.281 [2024-07-25 14:54:50.458591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.458968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.458979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.459430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.459441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.459901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.459912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.460612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.460626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.460772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.460783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.461285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.461296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.461755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.461766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.462269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.462281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 
00:27:30.282 [2024-07-25 14:54:50.462644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.462655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.463136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.463147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.463573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.463585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.463955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.463965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.464391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.464402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.464860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.464872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.465294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.465305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.465721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.465732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.466212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.466224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.466647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.466658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 
00:27:30.282 [2024-07-25 14:54:50.467117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.467129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.467451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.467465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.467911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.467923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.468342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.468354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.468579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.468590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.469019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.469030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.469516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.469529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.470022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.470033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.470520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.470532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.470955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.470966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 
00:27:30.282 [2024-07-25 14:54:50.471391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.471403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.471823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.471834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.472310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.472323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.472746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.472757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.473210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.473221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.473657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.473669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.473970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.473981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.474415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.474428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.282 qpair failed and we were unable to recover it. 00:27:30.282 [2024-07-25 14:54:50.474934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.282 [2024-07-25 14:54:50.474944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.475469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.475482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 
00:27:30.283 [2024-07-25 14:54:50.475912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.475923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.476401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.476413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.476888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.476899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.477345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.477356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.477855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.477866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.478300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.478312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.478678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.478689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.479205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.479216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.479647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.479658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.480103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.480113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 
00:27:30.283 [2024-07-25 14:54:50.480613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.480623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.481052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.481063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.481429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.481439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.481938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.481949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.482431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.482441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.482813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.482823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.483304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.483316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.483793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.483803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.484255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.484266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.484769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.484780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 
00:27:30.283 [2024-07-25 14:54:50.485259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.485270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.485749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.485763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.486289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.486300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.486673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.486683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.487108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.487119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.487548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.487558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.488011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.488021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.488455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.488467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.488885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.488895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.489379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.489389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 
00:27:30.283 [2024-07-25 14:54:50.489830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.489840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.489998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.490008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.490261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.490271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.490776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.490786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.491198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.491208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.283 [2024-07-25 14:54:50.491579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.283 [2024-07-25 14:54:50.491589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.283 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.492066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.492076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.492555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.492565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.492937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.492947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.493445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.493455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 
00:27:30.284 [2024-07-25 14:54:50.493883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.493893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.494646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.494657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.495141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.495152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.495562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.495573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.496052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.496063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.496542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.496553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.497029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.497039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.497410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.497421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.497934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.497945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.498302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.498313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 
00:27:30.284 [2024-07-25 14:54:50.498679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.498689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.499195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.499206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.499635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.499647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.500147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.500159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.500530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.500541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.500995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.501005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.501447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.501457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.501882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.501892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.502328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.502338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.502844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.502855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 
00:27:30.284 [2024-07-25 14:54:50.503359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.503371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.503793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.503803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.504258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.504268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.504958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.504969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.505398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.505409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.505837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.505847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.506326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.506336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.506777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.284 [2024-07-25 14:54:50.506787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.284 qpair failed and we were unable to recover it. 00:27:30.284 [2024-07-25 14:54:50.507232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.507243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.507602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.507612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 
00:27:30.285 [2024-07-25 14:54:50.508132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.508143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.508451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.508461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.508939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.508949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.509424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.509434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.509911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.509921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.510354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.510366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.510802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.510812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.511268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.511278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.511761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.511770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.512234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.512244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 
00:27:30.285 [2024-07-25 14:54:50.512742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.512752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.513181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.513191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.513627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.513637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.514073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.514084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.514585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.514595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.515030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.515040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.515473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.515483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.515905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.515915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.516415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.516427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.516807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.516817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 
00:27:30.285 [2024-07-25 14:54:50.517255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.517265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.517684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.517693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.518050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.518060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.518427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.518437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.518744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.518754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.519259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.519269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.519705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.519715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.520151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.520161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.520594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.520604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.520970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.520979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 
00:27:30.285 [2024-07-25 14:54:50.521486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.521496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.521953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.521963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.522444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.522454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.522873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.522882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.523362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.285 [2024-07-25 14:54:50.523373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.285 qpair failed and we were unable to recover it. 00:27:30.285 [2024-07-25 14:54:50.523872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.523882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 00:27:30.286 [2024-07-25 14:54:50.524176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.524186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 00:27:30.286 [2024-07-25 14:54:50.524638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.524648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 00:27:30.286 [2024-07-25 14:54:50.525070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.525080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 00:27:30.286 [2024-07-25 14:54:50.525339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.525348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 
00:27:30.286 [2024-07-25 14:54:50.525824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.525834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 00:27:30.286 [2024-07-25 14:54:50.526314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.526325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 00:27:30.286 [2024-07-25 14:54:50.526741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.526751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 00:27:30.286 [2024-07-25 14:54:50.527174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.527184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 00:27:30.286 [2024-07-25 14:54:50.527674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.527684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 00:27:30.286 [2024-07-25 14:54:50.528116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.528127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 00:27:30.286 [2024-07-25 14:54:50.528580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.528590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 00:27:30.286 [2024-07-25 14:54:50.529092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.529102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 00:27:30.286 [2024-07-25 14:54:50.529519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.529529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 00:27:30.286 [2024-07-25 14:54:50.529993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.286 [2024-07-25 14:54:50.530003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.286 qpair failed and we were unable to recover it. 
00:27:30.286 [2024-07-25 14:54:50.530424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.286 [2024-07-25 14:54:50.530435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420
00:27:30.286 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 14:54:50.530 through 14:54:50.624 ...]
00:27:30.580 [2024-07-25 14:54:50.624845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.580 [2024-07-25 14:54:50.624855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420
00:27:30.580 qpair failed and we were unable to recover it.
00:27:30.580 [2024-07-25 14:54:50.625264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.625274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.625600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.625610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.626050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.626060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.626535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.626545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.627001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.627011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.627313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.627323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.627799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.627809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.628241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.628252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.628671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.628681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.629161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.629171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 
00:27:30.581 [2024-07-25 14:54:50.629648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.629658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.630158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.630168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.630597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.630608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.631017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.631027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.631464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.631474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.631970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.631980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.632483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.632493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.632927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.632937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.633311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.633322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.633820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.633830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 
00:27:30.581 [2024-07-25 14:54:50.634262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.634272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.634770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.634779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.635301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.635311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.635793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.635803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.636014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.636024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.636454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.636465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.636944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.636954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.637162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.637172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.637614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.637624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 00:27:30.581 [2024-07-25 14:54:50.638101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.581 [2024-07-25 14:54:50.638112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.581 qpair failed and we were unable to recover it. 
00:27:30.581 [2024-07-25 14:54:50.638472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.638482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.638944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.638954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.639458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.639468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.639895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.639905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.640381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.640392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.640789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.640799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.641153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.641164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.641584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.641594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.642082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.642094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.642596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.642606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 
00:27:30.582 [2024-07-25 14:54:50.643050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.643061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.643493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.643503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.643979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.643990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.644183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.644193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.644671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.644681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.645118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.645128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.645578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.645587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.646004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.646014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.646492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.646502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.646977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.646987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 
00:27:30.582 [2024-07-25 14:54:50.647463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.647474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.648004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.648014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.648512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.648523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.649001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.649011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.649384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.649395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.649817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.649827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.650327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.650338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.650836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.650846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.651273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.651284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.651782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.651793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 
00:27:30.582 [2024-07-25 14:54:50.652201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.652212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.652644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.652654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.653079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.653089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.653521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.653531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.582 [2024-07-25 14:54:50.654006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.582 [2024-07-25 14:54:50.654016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.582 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.654521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.654531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.655039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.655053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.655421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.655431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.655906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.655916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.656415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.656426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 
00:27:30.583 [2024-07-25 14:54:50.656799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.656809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.657166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.657177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.657658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.657668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.658089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.658099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.658598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.658608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.659083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.659093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.659520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.659530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.659948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.659958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.660437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.660449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.660826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.660837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 
00:27:30.583 [2024-07-25 14:54:50.661207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.661218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.661695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.661705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.661917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.661928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.662353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.662365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.662855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.662865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.663276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.663286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.663727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.663737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.664200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.664219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.664573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.664582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.665031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.665041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 
00:27:30.583 [2024-07-25 14:54:50.665503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.665513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.666264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.666275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.666713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.666723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.667024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.667034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.667403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.667413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.667898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.667909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.668330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.668340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.668757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.668767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.669245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.583 [2024-07-25 14:54:50.669256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.583 qpair failed and we were unable to recover it. 00:27:30.583 [2024-07-25 14:54:50.669675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.669685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 
00:27:30.584 [2024-07-25 14:54:50.670136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.670146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.670831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.670842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.671202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.671212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.671643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.671653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.672094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.672105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.672318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.672328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.672764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.672774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.673083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.673094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.673507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.673517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.673995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.674005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 
00:27:30.584 [2024-07-25 14:54:50.674381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.674392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.674574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.674584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.675000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.675010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.675424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.675435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.675934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.675943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.676336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.676346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.676846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.676856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.677225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.677236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.677714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.677726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.678181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.678192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 
00:27:30.584 [2024-07-25 14:54:50.678671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.678681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.679184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.679195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.679746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.679756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.680049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.680059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.680495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.680505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.680930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.680941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.681419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.681429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.681932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.681942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.682311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.682322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.682678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.682689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 
00:27:30.584 [2024-07-25 14:54:50.683209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.683219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.683723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.683733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.684198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.684209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.684627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.584 [2024-07-25 14:54:50.684637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.584 qpair failed and we were unable to recover it. 00:27:30.584 [2024-07-25 14:54:50.685063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.685075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.685541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.685552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.685898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.685909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.686347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.686359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.686787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.686797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.687274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.687285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 
00:27:30.585 [2024-07-25 14:54:50.687650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.687660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.688113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.688124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.688542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.688552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.688910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.688921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.689398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.689409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.689772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.689782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.690212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.690222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.690724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.690734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.691100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.691111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.691536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.691546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 
00:27:30.585 [2024-07-25 14:54:50.692049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.692060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.692459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.692470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.692986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.692997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.693361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.693371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.693802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.693812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.694262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.694274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.694656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.694666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.695120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.695131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.695558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.695571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.695717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.695727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 
00:27:30.585 [2024-07-25 14:54:50.696153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.696163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.696526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.696536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.696910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.696919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.697381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.697391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.697868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.697879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.698240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.698251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.698753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.698764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.699192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.699203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.699834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.699844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 00:27:30.585 [2024-07-25 14:54:50.700220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.585 [2024-07-25 14:54:50.700231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.585 qpair failed and we were unable to recover it. 
00:27:30.586 [2024-07-25 14:54:50.700710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.700720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.701088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.701099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.701532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.701542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.701920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.701930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.702209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.702220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.702592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.702602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.703054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.703064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.703490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.703500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.703907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.703918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.704339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.704352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 
00:27:30.586 [2024-07-25 14:54:50.704703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.704714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.705142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.705152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.705585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.705595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.705898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.705909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.706273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.706284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.706764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.706775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.707021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.707031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.707474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.707485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.707835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.707845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.708205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.708216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 
00:27:30.586 [2024-07-25 14:54:50.708654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.708665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.709036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.709052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.709482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.709492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.709935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.709945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.710420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.710431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.710912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.710922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.711404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.711415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.711761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.711771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.712217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.712230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.586 qpair failed and we were unable to recover it. 00:27:30.586 [2024-07-25 14:54:50.712603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.586 [2024-07-25 14:54:50.712613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 
00:27:30.587 [2024-07-25 14:54:50.713009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.713019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.713447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.713458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.713849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.713860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.714233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.714244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.714683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.714695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.715056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.715066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.715491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.715501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.715942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.715951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.716380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.716390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.716643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.716653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 
00:27:30.587 [2024-07-25 14:54:50.717069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.717080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.717688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.717698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.718129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.718140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.718560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.718570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.718994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.719004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.719503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.719514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.720018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.720028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.720406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.720416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.720794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.720803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.721091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.721101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 
00:27:30.587 [2024-07-25 14:54:50.721551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.721561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.721918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.721930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.722296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.722307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.722730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.722740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.723166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.723177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.723596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.723607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.724050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.724061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.724538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.724548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.724839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.587 [2024-07-25 14:54:50.724849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.587 qpair failed and we were unable to recover it. 00:27:30.587 [2024-07-25 14:54:50.725278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.725288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 
00:27:30.588 [2024-07-25 14:54:50.725530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.725540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.725964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.725974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.726272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.726282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.726732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.726742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.727125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.727135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.727500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.727511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.727864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.727874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.728230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.728240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.728673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.728683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.729072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.729083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 
00:27:30.588 [2024-07-25 14:54:50.729607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.729618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.730056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.730066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.730491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.730501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.730932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.730943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.731373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.731385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.731758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.731768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.732078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.732088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.732471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.732480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.732923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.732934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.733299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.733309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 
00:27:30.588 [2024-07-25 14:54:50.733662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.733672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.734041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.734056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.734421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.734431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.734852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.734862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.735324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.735335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.735763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.735773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.736141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.736151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.736783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.736793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.737152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.737163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.737572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.737583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 
00:27:30.588 [2024-07-25 14:54:50.737956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.737966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.738383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.738393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.738773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.738783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.739192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.588 [2024-07-25 14:54:50.739203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.588 qpair failed and we were unable to recover it. 00:27:30.588 [2024-07-25 14:54:50.739744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.739755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.740142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.740154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.740312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.740322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.740781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.740791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.741146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.741157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.741502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.741512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 
00:27:30.589 [2024-07-25 14:54:50.741923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.741933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.742284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.742294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.742491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.742501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.742708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.742718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.743080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.743091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.743504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.743513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.743826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.743836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.744264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.744275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.744643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.744654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.745022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.745032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 
00:27:30.589 [2024-07-25 14:54:50.745396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.745406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.745770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.745779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.746129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.746140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.746515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.746525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.746972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.746982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.747238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.747248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.747573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.747584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.747953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.747963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.748206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.748216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.748528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.748537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 
00:27:30.589 [2024-07-25 14:54:50.748956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.748966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.749402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.749413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.749858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.749868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.750308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.750319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.750735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.750745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.751114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.751125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.751487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.751498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.751951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.751961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.752385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.752396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 00:27:30.589 [2024-07-25 14:54:50.752775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.589 [2024-07-25 14:54:50.752785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.589 qpair failed and we were unable to recover it. 
00:27:30.589 [2024-07-25 14:54:50.753053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.590 [2024-07-25 14:54:50.753064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420
00:27:30.590 qpair failed and we were unable to recover it.
00:27:30.590 [2024-07-25 14:54:50.753425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.590 [2024-07-25 14:54:50.753435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420
00:27:30.590 qpair failed and we were unable to recover it.
[... the same pair of errors (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420) repeats for every reconnect attempt between 14:54:50.753 and 14:54:50.837, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:30.596 [2024-07-25 14:54:50.837377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.596 [2024-07-25 14:54:50.837389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420
00:27:30.596 qpair failed and we were unable to recover it.
00:27:30.596 [2024-07-25 14:54:50.837758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.596 [2024-07-25 14:54:50.837769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.596 qpair failed and we were unable to recover it. 00:27:30.596 [2024-07-25 14:54:50.838131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.596 [2024-07-25 14:54:50.838142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.596 qpair failed and we were unable to recover it. 00:27:30.596 [2024-07-25 14:54:50.838511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.596 [2024-07-25 14:54:50.838522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.596 qpair failed and we were unable to recover it. 00:27:30.596 [2024-07-25 14:54:50.838711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.596 [2024-07-25 14:54:50.838724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.596 qpair failed and we were unable to recover it. 00:27:30.596 [2024-07-25 14:54:50.839109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.596 [2024-07-25 14:54:50.839123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.596 qpair failed and we were unable to recover it. 00:27:30.596 [2024-07-25 14:54:50.839529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.596 [2024-07-25 14:54:50.839540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.596 qpair failed and we were unable to recover it. 00:27:30.596 [2024-07-25 14:54:50.839740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.596 [2024-07-25 14:54:50.839751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.596 qpair failed and we were unable to recover it. 00:27:30.596 [2024-07-25 14:54:50.840240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.596 [2024-07-25 14:54:50.840251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.596 qpair failed and we were unable to recover it. 00:27:30.596 [2024-07-25 14:54:50.840702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.596 [2024-07-25 14:54:50.840712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.596 qpair failed and we were unable to recover it. 00:27:30.596 [2024-07-25 14:54:50.841177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.596 [2024-07-25 14:54:50.841187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.596 qpair failed and we were unable to recover it. 
00:27:30.596 [2024-07-25 14:54:50.841720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.596 [2024-07-25 14:54:50.841733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.596 qpair failed and we were unable to recover it. 00:27:30.596 [2024-07-25 14:54:50.842107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.596 [2024-07-25 14:54:50.842117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.596 qpair failed and we were unable to recover it. 00:27:30.596 [2024-07-25 14:54:50.842619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.596 [2024-07-25 14:54:50.842630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.596 qpair failed and we were unable to recover it. 00:27:30.596 [2024-07-25 14:54:50.843007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.596 [2024-07-25 14:54:50.843018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.596 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.844125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.844155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b0000b90 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.844660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.844701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.845272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.845308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.845494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.845510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.845894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.845909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.846285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.846300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 
00:27:30.867 [2024-07-25 14:54:50.846797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.846811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.847245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.847260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f82b8000b90 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.847796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.847815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.848123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.848139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.848583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.848597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.849093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.849108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.849535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.849549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.850047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.850062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.850657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.850671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.851110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.851124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 
00:27:30.867 [2024-07-25 14:54:50.851574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.851588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.851808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.851822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.852263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.852277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.852733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.852747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.853208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.853223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.853673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.853687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.854069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.854083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.854571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.854585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.854961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.854975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.855355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.855369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 
00:27:30.867 [2024-07-25 14:54:50.855741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.855755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.856159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.856173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.856528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.856542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.867 qpair failed and we were unable to recover it. 00:27:30.867 [2024-07-25 14:54:50.856977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.867 [2024-07-25 14:54:50.856991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.857261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.857276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.857700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.857714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.858151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.858166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.858525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.858538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.858909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.858923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.859373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.859390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 
00:27:30.868 [2024-07-25 14:54:50.859871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.859884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.860262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.860276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.860656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.860670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.861135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.861149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.861455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.861468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.861952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.861965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.862405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.862419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.862782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.862796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.863157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.863171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.863601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.863615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 
00:27:30.868 [2024-07-25 14:54:50.863978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.863992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.864379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.864393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.864781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.864795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.865191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.865205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.865792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.865806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.866183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.866197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.866638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.866652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.867047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.867060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.867485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.867499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.867885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.867899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 
00:27:30.868 [2024-07-25 14:54:50.868406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.868420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.868798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.868811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.869250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.869264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.869630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.869644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.870079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.868 [2024-07-25 14:54:50.870093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.868 qpair failed and we were unable to recover it. 00:27:30.868 [2024-07-25 14:54:50.870534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.870547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.870920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.870936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.871325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.871339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.871702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.871716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.872156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.872174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 
00:27:30.869 [2024-07-25 14:54:50.872543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.872557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.872913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.872927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.873363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.873378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.873840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.873854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.874316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.874330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.874607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.874621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.875053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.875067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.875449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.875462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.875829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.875843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.876435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.876449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 
00:27:30.869 [2024-07-25 14:54:50.876834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.876847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.877224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.877238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.877662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.877676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.878109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.878123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.878530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.878543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.878981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.878995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.879368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.879382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.879820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.879834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.880322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.880336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.880700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.880714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 
00:27:30.869 [2024-07-25 14:54:50.881151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.881166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.881327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.881341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.881693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.881707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.882085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.882099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.882535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.882548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.882770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.882784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.883171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.883186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.883563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.883577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.883975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.883989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 00:27:30.869 [2024-07-25 14:54:50.884420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.869 [2024-07-25 14:54:50.884434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.869 qpair failed and we were unable to recover it. 
00:27:30.869 [2024-07-25 14:54:50.884804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.884819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.885241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.885255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.885688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.885701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.886069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.886084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.886466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.886480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.886965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.886979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.887356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.887370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.887805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.887821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.888202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.888217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.888643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.888657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 
00:27:30.870 [2024-07-25 14:54:50.889082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.889097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.889466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.889479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.889979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.889992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.890375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.890390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.890886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.890900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.891267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.891281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.891679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.891693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.892130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.892144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.892629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.892642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.893080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.893094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 
00:27:30.870 [2024-07-25 14:54:50.893474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.893487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.893922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.893936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.894373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.894387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.894747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.894761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.895273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.895288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.895722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.895735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.896197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.896212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.896579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.896593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.896967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.896982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.897419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.897434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 
00:27:30.870 [2024-07-25 14:54:50.897879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.897893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.898323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.898337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.898822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.898836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.899091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.899106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.870 [2024-07-25 14:54:50.899555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.870 [2024-07-25 14:54:50.899569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.870 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.899949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.899965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.900351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.900365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.900800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.900814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.901558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.901575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.901962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.901976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 
00:27:30.871 [2024-07-25 14:54:50.902353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.902367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.902751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.902765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.903143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.903157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.903533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.903547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.903907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.903922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.904368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.904383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.904748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.904764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.905144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.905159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.905534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.905548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.905919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.905933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 
00:27:30.871 [2024-07-25 14:54:50.906372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.906387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.906765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.906780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.907169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.907183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.907558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.907572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.907945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.907959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.908337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.908351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.908789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.908803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.909182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.909197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.909572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.909586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.909972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.909986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 
00:27:30.871 [2024-07-25 14:54:50.910575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.910589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.911011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.911025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.911186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.911202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.911639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.911653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.912083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.912098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.912487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.912501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.912925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.912939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.913344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.913358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.913553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.913567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.871 [2024-07-25 14:54:50.913946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.913960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 
00:27:30.871 [2024-07-25 14:54:50.914382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.871 [2024-07-25 14:54:50.914396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.871 qpair failed and we were unable to recover it. 00:27:30.872 [2024-07-25 14:54:50.914789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.872 [2024-07-25 14:54:50.914803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.872 qpair failed and we were unable to recover it. 00:27:30.872 [2024-07-25 14:54:50.915174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.872 [2024-07-25 14:54:50.915189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.872 qpair failed and we were unable to recover it. 00:27:30.872 [2024-07-25 14:54:50.915633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.872 [2024-07-25 14:54:50.915647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.872 qpair failed and we were unable to recover it. 00:27:30.872 [2024-07-25 14:54:50.916069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.872 [2024-07-25 14:54:50.916083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.872 qpair failed and we were unable to recover it. 00:27:30.872 [2024-07-25 14:54:50.916515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.872 [2024-07-25 14:54:50.916532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.916963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.916977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.917492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.917506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.917965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.917979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.918413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.918427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 
00:27:30.873 [2024-07-25 14:54:50.918715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.918729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.919125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.919139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.919578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.919591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.920016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.920029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.920476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.920491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.920913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.920927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.921295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.921309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.921798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.921812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.922186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.922200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.922639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.922653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 
00:27:30.873 [2024-07-25 14:54:50.923169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.923183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.923566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.923580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.923953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.923966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.924389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.924403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.924777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.924791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.925214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.925228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.925598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.925612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.925985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.925998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.926489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.926504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.926993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.927007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 
00:27:30.873 [2024-07-25 14:54:50.927374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.927388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.927787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.927801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.928241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.928255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.928701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.928715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.929164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.929179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.929558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.929571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.929989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.930003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.930435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.930448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.930884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.930897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 00:27:30.873 [2024-07-25 14:54:50.931280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.873 [2024-07-25 14:54:50.931293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.873 qpair failed and we were unable to recover it. 
00:27:30.874 [2024-07-25 14:54:50.931800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.931814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.932250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.932265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.932718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.932731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.933149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.933163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.933375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.933389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.933824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.933838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.934216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.934231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.934665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.934678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.935111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.935125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.935487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.935501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 
00:27:30.874 [2024-07-25 14:54:50.935922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.935936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.936364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.936377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.936762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.936775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.937220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.937234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.937597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.937612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.938037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.938070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.938429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.938443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.938876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.938890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.939320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.939334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.939694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.939708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 
00:27:30.874 [2024-07-25 14:54:50.940141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.940155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.940590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.940603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.941032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.941052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.941417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.941431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.942104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.942118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.942486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.942500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.942750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.942763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.943255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.943269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.943668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.943682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.944146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.944161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 
00:27:30.874 [2024-07-25 14:54:50.944656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.944670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.945111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.945125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.945557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.945570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.945773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.945789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.874 [2024-07-25 14:54:50.946163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.874 [2024-07-25 14:54:50.946178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.874 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.946606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.946619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.947107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.947122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.947501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.947514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.947952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.947966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.948347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.948361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 
00:27:30.875 [2024-07-25 14:54:50.948724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.948738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.949165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.949179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.949562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.949576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.949933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.949947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.950388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.950402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.950833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.950847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.951227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.951241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.951679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.951693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.952127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.952141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.952569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.952583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 
00:27:30.875 [2024-07-25 14:54:50.952964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.952977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.953365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.953379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.953796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.953810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.954195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.954209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.954577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.954591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.954952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.954966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.955362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.955376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.955732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.955746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.956232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.956246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.956663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.956677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 
00:27:30.875 [2024-07-25 14:54:50.957123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.957136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.957491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.957505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.957922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.957936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.958310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.958324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.958762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.958776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.959150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.959164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.959583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.959598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.960138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.960152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.875 [2024-07-25 14:54:50.960404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.875 [2024-07-25 14:54:50.960418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.875 qpair failed and we were unable to recover it. 00:27:30.876 [2024-07-25 14:54:50.960904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.876 [2024-07-25 14:54:50.960918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.876 qpair failed and we were unable to recover it. 
00:27:30.876 [2024-07-25 14:54:50.961362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.876 [2024-07-25 14:54:50.961376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.876 qpair failed and we were unable to recover it. 00:27:30.876 [2024-07-25 14:54:50.961823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.876 [2024-07-25 14:54:50.961837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.876 qpair failed and we were unable to recover it. 00:27:30.876 [2024-07-25 14:54:50.962225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.876 [2024-07-25 14:54:50.962240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.876 qpair failed and we were unable to recover it. 00:27:30.876 [2024-07-25 14:54:50.962676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.876 [2024-07-25 14:54:50.962690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.876 qpair failed and we were unable to recover it. 00:27:30.876 [2024-07-25 14:54:50.963141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.876 [2024-07-25 14:54:50.963158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.876 qpair failed and we were unable to recover it. 00:27:30.876 [2024-07-25 14:54:50.963521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.876 [2024-07-25 14:54:50.963536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.876 qpair failed and we were unable to recover it. 00:27:30.876 [2024-07-25 14:54:50.963915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.876 [2024-07-25 14:54:50.963929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.876 qpair failed and we were unable to recover it. 00:27:30.876 [2024-07-25 14:54:50.964126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.876 [2024-07-25 14:54:50.964141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.876 qpair failed and we were unable to recover it. 00:27:30.876 [2024-07-25 14:54:50.964571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.876 [2024-07-25 14:54:50.964585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.876 qpair failed and we were unable to recover it. 00:27:30.876 [2024-07-25 14:54:50.965003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.876 [2024-07-25 14:54:50.965017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.876 qpair failed and we were unable to recover it. 
00:27:30.876 [2024-07-25 14:54:50.965463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.876 [2024-07-25 14:54:50.965478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:30.876 qpair failed and we were unable to recover it.
00:27:30.876 [... identical three-line records repeat without interruption from [2024-07-25 14:54:50.965] through [2024-07-25 14:54:51.053] (console window 00:27:30.876-00:27:30.882): posix.c:1038:posix_sock_create connect() failing with errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:27:30.882 [2024-07-25 14:54:51.053711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.882 [2024-07-25 14:54:51.053725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:30.882 qpair failed and we were unable to recover it.
00:27:30.882 [2024-07-25 14:54:51.053950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.053963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.054350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.054364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.054743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.054757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.055068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.055083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.055505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.055519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.055887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.055901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.056300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.056314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.056732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.056746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.057121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.057135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.057749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.057763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 
00:27:30.882 [2024-07-25 14:54:51.058445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.058460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.058894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.058908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.059295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.059309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.059735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.059749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.060184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.060198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.060561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.060575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.061075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.061090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.061630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.061644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.882 qpair failed and we were unable to recover it. 00:27:30.882 [2024-07-25 14:54:51.062084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.882 [2024-07-25 14:54:51.062099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.062606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.062620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 
00:27:30.883 [2024-07-25 14:54:51.063058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.063073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.063567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.063581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.064093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.064108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.064584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.064598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.064968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.064982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.065424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.065438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.065802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.065816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.065978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.065991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.066508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.066525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.066896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.066911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 
00:27:30.883 [2024-07-25 14:54:51.067420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.067435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.067856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.067870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.068245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.068259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.068691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.068705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.069141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.069155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.069539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.069552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.069978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.069993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.070415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.070430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.070805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.070819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.071247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.071261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 
00:27:30.883 [2024-07-25 14:54:51.071626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.071641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.072005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.072019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.072722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.072739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.073161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.073176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.073673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.073688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.074082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.074097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.074449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.074463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.074903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.074917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.075542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.075556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 00:27:30.883 [2024-07-25 14:54:51.075924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.883 [2024-07-25 14:54:51.075938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.883 qpair failed and we were unable to recover it. 
00:27:30.883 [2024-07-25 14:54:51.076299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.076314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.076680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.076694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.077087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.077102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.077538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.077552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.077909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.077923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.078298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.078313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.078755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.078771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.079202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.079216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.079383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.079397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.079823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.079837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 
00:27:30.884 [2024-07-25 14:54:51.080213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.080229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.080729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.080743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.081139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.081153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.081533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.081547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.081925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.081940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.082309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.082324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.082544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.082559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.082920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.082934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.083322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.083337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.083598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.083613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 
00:27:30.884 [2024-07-25 14:54:51.084038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.084057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.084424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.084438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.084835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.084850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.085218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.085234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.085601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.085615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.086041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.086062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.086421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.086436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.086814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.086828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.087064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.087079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.087713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.087728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 
00:27:30.884 [2024-07-25 14:54:51.088179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.088194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.088577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.088591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.088961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.088975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.089417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.884 [2024-07-25 14:54:51.089432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.884 qpair failed and we were unable to recover it. 00:27:30.884 [2024-07-25 14:54:51.089875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.089889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.090256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.090270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.090645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.090660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.091048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.091063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.091491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.091505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.091662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.091677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 
00:27:30.885 [2024-07-25 14:54:51.092034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.092058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.092487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.092501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.093173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.093188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.093356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.093369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.093788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.093802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.094254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.094269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.094694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.094709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.095157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.095174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.095548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.095562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.096017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.096032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 
00:27:30.885 [2024-07-25 14:54:51.096470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.096484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.096857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.096871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.097427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.097441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.097947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.097962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.098380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.098395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.098821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.098837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.099221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.099235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.099609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.099624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.100060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.100075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.100531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.100545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 
00:27:30.885 [2024-07-25 14:54:51.100966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.100981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.101428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.101443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.101699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.101714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.102160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.102174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.102692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.102707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.103145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.103160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.103537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.103551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.103989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.104003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.104440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.885 [2024-07-25 14:54:51.104454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.885 qpair failed and we were unable to recover it. 00:27:30.885 [2024-07-25 14:54:51.104899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.104914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 
00:27:30.886 [2024-07-25 14:54:51.105404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.105418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.105843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.105857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.106227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.106242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.106666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.106679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.107063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.107080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.107502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.107516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.107878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.107892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.108258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.108272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.108525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.108539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.108687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.108700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 
00:27:30.886 [2024-07-25 14:54:51.109133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.109147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.109608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.109622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.110106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.110120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.110490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.110504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.110869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.110883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.111318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.111332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.111819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.111833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.112193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.112207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.112640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.112654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.113138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.113153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 
00:27:30.886 [2024-07-25 14:54:51.113620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.113634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.113999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.114014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.114325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.114340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.114760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.114775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.115233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.115247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.115601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.115615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.116126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.116140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.116579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.116594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.116962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.116976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.117398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.117412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 
00:27:30.886 [2024-07-25 14:54:51.117896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.117911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.118419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.118433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.118810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.118824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.119345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.119359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.886 qpair failed and we were unable to recover it. 00:27:30.886 [2024-07-25 14:54:51.119864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.886 [2024-07-25 14:54:51.119879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.120257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.120272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.120586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.120601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.121022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.121036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.121525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.121540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.121983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.121997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 
00:27:30.887 [2024-07-25 14:54:51.122428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.122442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.122898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.122912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.123342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.123356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.123576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.123590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.124022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.124036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.124547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.124565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.124981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.124995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.125420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.125435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.125854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.125868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.126118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.126133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 
00:27:30.887 [2024-07-25 14:54:51.126642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.126656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.127181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.127195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.127710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.127724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.128175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.128190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.128613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.128627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.129053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.129067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.129239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.129253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.129630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.129643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.130092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.130106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.130558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.130571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 
00:27:30.887 [2024-07-25 14:54:51.131078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.131092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.131478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.131492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.131928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.131941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.132450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.132464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.133001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.133015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.133439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.133454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.133616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.133630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.887 [2024-07-25 14:54:51.134054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.887 [2024-07-25 14:54:51.134068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.887 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.134575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.134589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.135095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.135109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 
00:27:30.888 [2024-07-25 14:54:51.135613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.135627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.136081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.136095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.136516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.136531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.137039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.137057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.137570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.137583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.138069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.138082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.138511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.138524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.138980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.138994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.139378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.139392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.139752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.139766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 
00:27:30.888 [2024-07-25 14:54:51.140236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.140250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.140471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.140485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.140932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.140945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.141468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.141482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.141915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.141928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.142365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.142379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.142889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.142902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.143267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.143281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.143789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.143803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.144085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.144099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 
00:27:30.888 [2024-07-25 14:54:51.144479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.144493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.144981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.144995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.145440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.145454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.145941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.145954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.146466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.146480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.146896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.146909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.888 [2024-07-25 14:54:51.147327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.888 [2024-07-25 14:54:51.147341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.888 qpair failed and we were unable to recover it. 00:27:30.889 [2024-07-25 14:54:51.147848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.889 [2024-07-25 14:54:51.147862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.889 qpair failed and we were unable to recover it. 00:27:30.889 [2024-07-25 14:54:51.148164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.889 [2024-07-25 14:54:51.148177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.889 qpair failed and we were unable to recover it. 00:27:30.889 [2024-07-25 14:54:51.148614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.889 [2024-07-25 14:54:51.148627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:30.889 qpair failed and we were unable to recover it. 
00:27:31.153 [2024-07-25 14:54:51.148998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.149012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.153 [2024-07-25 14:54:51.149433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.149447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.153 [2024-07-25 14:54:51.149901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.149915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.153 [2024-07-25 14:54:51.150351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.150366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.153 [2024-07-25 14:54:51.150852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.150866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.153 [2024-07-25 14:54:51.151298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.151311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.153 [2024-07-25 14:54:51.151688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.151701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.153 [2024-07-25 14:54:51.152211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.152225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.153 [2024-07-25 14:54:51.152712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.152726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.153 [2024-07-25 14:54:51.153006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.153020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 
00:27:31.153 [2024-07-25 14:54:51.153484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.153498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.153 [2024-07-25 14:54:51.153939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.153953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.153 [2024-07-25 14:54:51.154189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.154203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.153 [2024-07-25 14:54:51.154712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.154728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.153 [2024-07-25 14:54:51.155214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.155229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.153 [2024-07-25 14:54:51.155714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.153 [2024-07-25 14:54:51.155727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.153 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.156006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.156019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.156507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.156522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.156942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.156955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.157393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.157407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 
00:27:31.154 [2024-07-25 14:54:51.157891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.157905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.158413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.158426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.158922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.158935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.159430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.159444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.159813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.159827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.160334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.160348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.160776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.160789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.161314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.161328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:31.154 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 [2024-07-25 14:54:51.161757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.161772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt [2024-07-25 14:54:51.162200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.162214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:31.154 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.154 [2024-07-25 14:54:51.162647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.162661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.163096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.163110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.163617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.163632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.164084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.164098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.164470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.164484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.164915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.164929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.165439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.165454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.165884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.154 [2024-07-25 14:54:51.165898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420
00:27:31.154 qpair failed and we were unable to recover it.
00:27:31.154 [2024-07-25 14:54:51.166339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.166353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.166844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.166858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.167343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.167357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.167822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.167837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.168261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.168275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.168715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.168730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.169215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.169230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.169665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.169679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.170193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.170207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.170716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.170730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 
00:27:31.154 [2024-07-25 14:54:51.171229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.171243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.171627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.171641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.154 [2024-07-25 14:54:51.172154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.154 [2024-07-25 14:54:51.172170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.154 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.172653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.172668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.173065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.173080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.173509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.173523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.173951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.173964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.174384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.174398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.174834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.174849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.175222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.175236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 
00:27:31.155 [2024-07-25 14:54:51.175606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.175620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.176368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.176383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.176822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.176836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.177144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.177158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.177611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.177625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.178073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.178087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.178571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.178585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.179018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.179032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.179527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.179542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.179907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.179922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 
00:27:31.155 [2024-07-25 14:54:51.180243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.180258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.180651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.180665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.181093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.181108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.181313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.181327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.181623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.181637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.182122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.182136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.182449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.182463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.182711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.182725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.183100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.183115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.183603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.183617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 
00:27:31.155 [2024-07-25 14:54:51.184072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.184086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.184470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.184487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.184870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.184885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.185255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.185270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.185655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.185669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.186052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.186067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.186812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.186827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.187209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.187223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.187643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.187658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 00:27:31.155 [2024-07-25 14:54:51.188092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.155 [2024-07-25 14:54:51.188106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.155 qpair failed and we were unable to recover it. 
00:27:31.156 [2024-07-25 14:54:51.188533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.188547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.188924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.188938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.189371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.189385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.189804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.189817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.190308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.190322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.190679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.190693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.191240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.191254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.191703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.191717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.192141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.192155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.192416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.192430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 
00:27:31.156 [2024-07-25 14:54:51.192866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.192880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.193244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.193258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.193640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.193654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.194110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.194124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.194498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.194512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.194866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.194880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.195329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.195344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.195714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.195728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.196101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.196116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 
00:27:31.156 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.156 [2024-07-25 14:54:51.196427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.196444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:31.156 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.156 [2024-07-25 14:54:51.196873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.196889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.156 [2024-07-25 14:54:51.197337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.197352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.197805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.197819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.198199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.198213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.198661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.198674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.199060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.199074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.199438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.199452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 
00:27:31.156 [2024-07-25 14:54:51.199883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.199897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.200329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.200343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.200774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.200789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.201152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.201169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.201654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.201668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.202112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.202126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.202506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.202520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.202673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.156 [2024-07-25 14:54:51.202686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.156 qpair failed and we were unable to recover it. 00:27:31.156 [2024-07-25 14:54:51.203124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.203138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.203569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.203583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 
00:27:31.157 [2024-07-25 14:54:51.204018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.204032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.204411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.204425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.204875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.204889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.205265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.205280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.205703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.205717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.206203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.206217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.206659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.206674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.206916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.206930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.207421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.207436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.207731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.207746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 
00:27:31.157 [2024-07-25 14:54:51.208116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.208131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.208511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.208526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.208690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.208704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.209145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.209160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.209616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.209631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.210007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.210023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.210483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.210499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.210935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.210951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.211439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.211455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.211823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.211838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 
00:27:31.157 [2024-07-25 14:54:51.212279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.212297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.212747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.212764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.213141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.213157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.213669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.213686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.214204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.214222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.214660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.214674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.215101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.215116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.215600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.215614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.216038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.216055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.216288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.216302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 
00:27:31.157 [2024-07-25 14:54:51.216678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.216692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 Malloc0 00:27:31.157 [2024-07-25 14:54:51.217053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.217067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 [2024-07-25 14:54:51.217577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.217592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.157 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:31.157 [2024-07-25 14:54:51.218100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.218119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.157 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.157 [2024-07-25 14:54:51.218557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.157 [2024-07-25 14:54:51.218572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.157 qpair failed and we were unable to recover it. 00:27:31.158 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.158 [2024-07-25 14:54:51.218995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.219009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.219375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.219389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.219822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.219836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 
00:27:31.158 [2024-07-25 14:54:51.220262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.220276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.220707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.220721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.221149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.221162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.221531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.221545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.222055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.222069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.222503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.222517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.223029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.223046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.223416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.223430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.223943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.223956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.224383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.224396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 
00:27:31.158 [2024-07-25 14:54:51.224427] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.158 [2024-07-25 14:54:51.224813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.224828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.225334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.225348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.225775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.225788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.226238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.226252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.226760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.226774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.227162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.227176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.227664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.227678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.228131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.228144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.228517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.228530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.229053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.229067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 
00:27:31.158 [2024-07-25 14:54:51.229521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.229534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.229955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.229972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.230408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.230422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.230866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.230879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.231369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.231383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.231808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.231822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.232250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.158 [2024-07-25 14:54:51.232264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.158 qpair failed and we were unable to recover it. 00:27:31.158 [2024-07-25 14:54:51.232694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.232708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.159 [2024-07-25 14:54:51.233130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.233148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 
00:27:31.159 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:31.159 [2024-07-25 14:54:51.233509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.233523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.159 [2024-07-25 14:54:51.233954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.233968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.159 [2024-07-25 14:54:51.234472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.234486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.234917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.234931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.235371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.235385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.235871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.235885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.236381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.236395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.236848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.236861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.237293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.237307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 
00:27:31.159 [2024-07-25 14:54:51.237674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.237687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.238196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.238210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.238629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.238643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.239012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.239026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.239488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.239502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.239934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.239947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.240367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.240381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.240832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.240846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.159 [2024-07-25 14:54:51.241354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.241371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 
00:27:31.159 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:31.159 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.159 [2024-07-25 14:54:51.241881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.241895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.159 [2024-07-25 14:54:51.242354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.242368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.242854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.242868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.243377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.243391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.243826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.243840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.244277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.244291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.244799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.244812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.245234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.245248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.245751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.245764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 
00:27:31.159 [2024-07-25 14:54:51.246201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.246215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.246719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.246733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.247166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.247180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.247617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.247630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.248138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.248152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.159 [2024-07-25 14:54:51.248511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.159 [2024-07-25 14:54:51.248525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.159 qpair failed and we were unable to recover it. 00:27:31.160 [2024-07-25 14:54:51.248890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.160 [2024-07-25 14:54:51.248904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.160 qpair failed and we were unable to recover it. 00:27:31.160 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.160 [2024-07-25 14:54:51.249356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.160 [2024-07-25 14:54:51.249370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.160 qpair failed and we were unable to recover it. 00:27:31.160 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.160 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.160 [2024-07-25 14:54:51.249868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.160 [2024-07-25 14:54:51.249882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.160 qpair failed and we were unable to recover it. 
00:27:31.160 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.160 [2024-07-25 14:54:51.250368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.160 [2024-07-25 14:54:51.250383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.160 qpair failed and we were unable to recover it. 00:27:31.160 [2024-07-25 14:54:51.250832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.160 [2024-07-25 14:54:51.250846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.160 qpair failed and we were unable to recover it. 00:27:31.160 [2024-07-25 14:54:51.251335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.160 [2024-07-25 14:54:51.251348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.160 qpair failed and we were unable to recover it. 00:27:31.160 [2024-07-25 14:54:51.251784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.160 [2024-07-25 14:54:51.251798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.160 qpair failed and we were unable to recover it. 00:27:31.160 [2024-07-25 14:54:51.252248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.160 [2024-07-25 14:54:51.252262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.160 qpair failed and we were unable to recover it. 00:27:31.160 [2024-07-25 14:54:51.252651] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.160 [2024-07-25 14:54:51.252746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.160 [2024-07-25 14:54:51.252760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3ed0 with addr=10.0.0.2, port=4420 00:27:31.160 qpair failed and we were unable to recover it. 00:27:31.160 [2024-07-25 14:54:51.255111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.160 [2024-07-25 14:54:51.255324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.160 [2024-07-25 14:54:51.255353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.160 [2024-07-25 14:54:51.255364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.160 [2024-07-25 14:54:51.255373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.160 [2024-07-25 14:54:51.255402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.160 qpair failed and we were unable to recover it. 
00:27:31.160 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.160 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:31.160 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.160 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.160 [2024-07-25 14:54:51.265064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.160 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.160 [2024-07-25 14:54:51.265242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.160 [2024-07-25 14:54:51.265266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.160 [2024-07-25 14:54:51.265276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.160 [2024-07-25 14:54:51.265284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.160 [2024-07-25 14:54:51.265305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.160 qpair failed and we were unable to recover it. 00:27:31.160 14:54:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2485932 00:27:31.160 [2024-07-25 14:54:51.274972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.160 [2024-07-25 14:54:51.275129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.160 [2024-07-25 14:54:51.275149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.160 [2024-07-25 14:54:51.275156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.160 [2024-07-25 14:54:51.275162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.160 [2024-07-25 14:54:51.275179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.160 qpair failed and we were unable to recover it. 
00:27:31.160 [2024-07-25 14:54:51.285008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.160 [2024-07-25 14:54:51.285158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.160 [2024-07-25 14:54:51.285179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.160 [2024-07-25 14:54:51.285186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.160 [2024-07-25 14:54:51.285192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.160 [2024-07-25 14:54:51.285209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.160 qpair failed and we were unable to recover it. 00:27:31.160 [2024-07-25 14:54:51.295047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.160 [2024-07-25 14:54:51.295204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.160 [2024-07-25 14:54:51.295224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.160 [2024-07-25 14:54:51.295232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.160 [2024-07-25 14:54:51.295239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.160 [2024-07-25 14:54:51.295257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.160 qpair failed and we were unable to recover it. 00:27:31.160 [2024-07-25 14:54:51.305075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.160 [2024-07-25 14:54:51.305222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.160 [2024-07-25 14:54:51.305241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.160 [2024-07-25 14:54:51.305249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.160 [2024-07-25 14:54:51.305255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.160 [2024-07-25 14:54:51.305272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.160 qpair failed and we were unable to recover it. 
00:27:31.160 [2024-07-25 14:54:51.315031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.160 [2024-07-25 14:54:51.315178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.160 [2024-07-25 14:54:51.315197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.160 [2024-07-25 14:54:51.315204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.160 [2024-07-25 14:54:51.315210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.160 [2024-07-25 14:54:51.315228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.160 qpair failed and we were unable to recover it. 00:27:31.160 [2024-07-25 14:54:51.325119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.160 [2024-07-25 14:54:51.325268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.160 [2024-07-25 14:54:51.325287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.160 [2024-07-25 14:54:51.325294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.160 [2024-07-25 14:54:51.325300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.160 [2024-07-25 14:54:51.325321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.160 qpair failed and we were unable to recover it. 00:27:31.160 [2024-07-25 14:54:51.335148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.160 [2024-07-25 14:54:51.335300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.161 [2024-07-25 14:54:51.335319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.161 [2024-07-25 14:54:51.335326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.161 [2024-07-25 14:54:51.335332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.161 [2024-07-25 14:54:51.335349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.161 qpair failed and we were unable to recover it. 
00:27:31.161 [2024-07-25 14:54:51.345195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.161 [2024-07-25 14:54:51.345342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.161 [2024-07-25 14:54:51.345361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.161 [2024-07-25 14:54:51.345368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.161 [2024-07-25 14:54:51.345374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.161 [2024-07-25 14:54:51.345391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.161 qpair failed and we were unable to recover it. 00:27:31.161 [2024-07-25 14:54:51.355196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.161 [2024-07-25 14:54:51.355342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.161 [2024-07-25 14:54:51.355361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.161 [2024-07-25 14:54:51.355368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.161 [2024-07-25 14:54:51.355374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.161 [2024-07-25 14:54:51.355391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.161 qpair failed and we were unable to recover it. 00:27:31.161 [2024-07-25 14:54:51.365235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.161 [2024-07-25 14:54:51.365383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.161 [2024-07-25 14:54:51.365402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.161 [2024-07-25 14:54:51.365409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.161 [2024-07-25 14:54:51.365416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.161 [2024-07-25 14:54:51.365433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.161 qpair failed and we were unable to recover it. 
00:27:31.161 [2024-07-25 14:54:51.375268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.161 [2024-07-25 14:54:51.375415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.161 [2024-07-25 14:54:51.375437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.161 [2024-07-25 14:54:51.375444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.161 [2024-07-25 14:54:51.375450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.161 [2024-07-25 14:54:51.375466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.161 qpair failed and we were unable to recover it. 00:27:31.161 [2024-07-25 14:54:51.385313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.161 [2024-07-25 14:54:51.385455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.161 [2024-07-25 14:54:51.385474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.161 [2024-07-25 14:54:51.385481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.161 [2024-07-25 14:54:51.385487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.161 [2024-07-25 14:54:51.385505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.161 qpair failed and we were unable to recover it. 00:27:31.161 [2024-07-25 14:54:51.395322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.161 [2024-07-25 14:54:51.395466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.161 [2024-07-25 14:54:51.395485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.161 [2024-07-25 14:54:51.395491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.161 [2024-07-25 14:54:51.395497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.161 [2024-07-25 14:54:51.395514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.161 qpair failed and we were unable to recover it. 
00:27:31.161 [2024-07-25 14:54:51.405349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.161 [2024-07-25 14:54:51.405496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.161 [2024-07-25 14:54:51.405515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.161 [2024-07-25 14:54:51.405521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.161 [2024-07-25 14:54:51.405527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.161 [2024-07-25 14:54:51.405544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.161 qpair failed and we were unable to recover it. 00:27:31.161 [2024-07-25 14:54:51.415374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.161 [2024-07-25 14:54:51.415517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.161 [2024-07-25 14:54:51.415536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.161 [2024-07-25 14:54:51.415543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.161 [2024-07-25 14:54:51.415549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.161 [2024-07-25 14:54:51.415569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.161 qpair failed and we were unable to recover it. 00:27:31.161 [2024-07-25 14:54:51.425403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.161 [2024-07-25 14:54:51.425548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.161 [2024-07-25 14:54:51.425567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.161 [2024-07-25 14:54:51.425573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.161 [2024-07-25 14:54:51.425579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.161 [2024-07-25 14:54:51.425595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.161 qpair failed and we were unable to recover it. 
00:27:31.161 [2024-07-25 14:54:51.435350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.161 [2024-07-25 14:54:51.435493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.161 [2024-07-25 14:54:51.435511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.161 [2024-07-25 14:54:51.435518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.161 [2024-07-25 14:54:51.435524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.161 [2024-07-25 14:54:51.435541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.161 qpair failed and we were unable to recover it. 00:27:31.423 [2024-07-25 14:54:51.445366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.423 [2024-07-25 14:54:51.445514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.423 [2024-07-25 14:54:51.445533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.423 [2024-07-25 14:54:51.445540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.423 [2024-07-25 14:54:51.445546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.423 [2024-07-25 14:54:51.445563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.423 qpair failed and we were unable to recover it. 00:27:31.423 [2024-07-25 14:54:51.455489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.423 [2024-07-25 14:54:51.455633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.423 [2024-07-25 14:54:51.455652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.423 [2024-07-25 14:54:51.455659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.423 [2024-07-25 14:54:51.455665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.423 [2024-07-25 14:54:51.455682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.423 qpair failed and we were unable to recover it. 
00:27:31.423 [2024-07-25 14:54:51.465639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.423 [2024-07-25 14:54:51.465789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.423 [2024-07-25 14:54:51.465815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.423 [2024-07-25 14:54:51.465822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.423 [2024-07-25 14:54:51.465828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.423 [2024-07-25 14:54:51.465845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.423 qpair failed and we were unable to recover it. 00:27:31.423 [2024-07-25 14:54:51.475609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.423 [2024-07-25 14:54:51.475750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.423 [2024-07-25 14:54:51.475768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.423 [2024-07-25 14:54:51.475775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.423 [2024-07-25 14:54:51.475781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.423 [2024-07-25 14:54:51.475798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.423 qpair failed and we were unable to recover it. 00:27:31.423 [2024-07-25 14:54:51.485600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.423 [2024-07-25 14:54:51.485751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.423 [2024-07-25 14:54:51.485769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.423 [2024-07-25 14:54:51.485776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.423 [2024-07-25 14:54:51.485782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.423 [2024-07-25 14:54:51.485799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.423 qpair failed and we were unable to recover it. 
00:27:31.423 [2024-07-25 14:54:51.495630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.423 [2024-07-25 14:54:51.495780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.423 [2024-07-25 14:54:51.495798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.423 [2024-07-25 14:54:51.495806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.423 [2024-07-25 14:54:51.495812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.423 [2024-07-25 14:54:51.495828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.423 qpair failed and we were unable to recover it. 00:27:31.423 [2024-07-25 14:54:51.505601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.423 [2024-07-25 14:54:51.505788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.423 [2024-07-25 14:54:51.505807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.423 [2024-07-25 14:54:51.505814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.423 [2024-07-25 14:54:51.505824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.423 [2024-07-25 14:54:51.505840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.423 qpair failed and we were unable to recover it. 00:27:31.424 [2024-07-25 14:54:51.515669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.424 [2024-07-25 14:54:51.515811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.424 [2024-07-25 14:54:51.515830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.424 [2024-07-25 14:54:51.515837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.424 [2024-07-25 14:54:51.515843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.424 [2024-07-25 14:54:51.515859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.424 qpair failed and we were unable to recover it. 
00:27:31.424 [2024-07-25 14:54:51.525717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.424 [2024-07-25 14:54:51.525876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.424 [2024-07-25 14:54:51.525895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.424 [2024-07-25 14:54:51.525902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.424 [2024-07-25 14:54:51.525908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.424 [2024-07-25 14:54:51.525924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.424 qpair failed and we were unable to recover it. 00:27:31.424 [2024-07-25 14:54:51.535727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.424 [2024-07-25 14:54:51.535887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.424 [2024-07-25 14:54:51.535906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.424 [2024-07-25 14:54:51.535913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.424 [2024-07-25 14:54:51.535918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.424 [2024-07-25 14:54:51.535935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.424 qpair failed and we were unable to recover it. 00:27:31.424 [2024-07-25 14:54:51.545752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.424 [2024-07-25 14:54:51.545899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.424 [2024-07-25 14:54:51.545918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.424 [2024-07-25 14:54:51.545925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.424 [2024-07-25 14:54:51.545930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.424 [2024-07-25 14:54:51.545948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.424 qpair failed and we were unable to recover it. 
00:27:31.424 [2024-07-25 14:54:51.555774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.424 [2024-07-25 14:54:51.555921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.424 [2024-07-25 14:54:51.555939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.424 [2024-07-25 14:54:51.555946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.424 [2024-07-25 14:54:51.555952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.424 [2024-07-25 14:54:51.555969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.424 qpair failed and we were unable to recover it. 00:27:31.424 [2024-07-25 14:54:51.565821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.424 [2024-07-25 14:54:51.565990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.424 [2024-07-25 14:54:51.566009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.424 [2024-07-25 14:54:51.566016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.424 [2024-07-25 14:54:51.566022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.424 [2024-07-25 14:54:51.566039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.424 qpair failed and we were unable to recover it. 00:27:31.424 [2024-07-25 14:54:51.575796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.424 [2024-07-25 14:54:51.575944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.424 [2024-07-25 14:54:51.575963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.424 [2024-07-25 14:54:51.575970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.424 [2024-07-25 14:54:51.575976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.424 [2024-07-25 14:54:51.575993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.424 qpair failed and we were unable to recover it. 
00:27:31.424 [2024-07-25 14:54:51.585850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.424 [2024-07-25 14:54:51.586014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.424 [2024-07-25 14:54:51.586033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.424 [2024-07-25 14:54:51.586040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.424 [2024-07-25 14:54:51.586052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.424 [2024-07-25 14:54:51.586069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.424 qpair failed and we were unable to recover it. 00:27:31.424 [2024-07-25 14:54:51.595910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.424 [2024-07-25 14:54:51.596061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.424 [2024-07-25 14:54:51.596080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.424 [2024-07-25 14:54:51.596086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.424 [2024-07-25 14:54:51.596096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.424 [2024-07-25 14:54:51.596113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.424 qpair failed and we were unable to recover it. 00:27:31.424 [2024-07-25 14:54:51.605920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.424 [2024-07-25 14:54:51.606075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.424 [2024-07-25 14:54:51.606094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.424 [2024-07-25 14:54:51.606101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.424 [2024-07-25 14:54:51.606107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.424 [2024-07-25 14:54:51.606124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.424 qpair failed and we were unable to recover it. 
00:27:31.424 [2024-07-25 14:54:51.615939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.424 [2024-07-25 14:54:51.616096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.424 [2024-07-25 14:54:51.616115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.424 [2024-07-25 14:54:51.616122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.424 [2024-07-25 14:54:51.616127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.424 [2024-07-25 14:54:51.616144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.424 qpair failed and we were unable to recover it. 00:27:31.424 [2024-07-25 14:54:51.625962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.424 [2024-07-25 14:54:51.626115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.424 [2024-07-25 14:54:51.626135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.424 [2024-07-25 14:54:51.626141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.424 [2024-07-25 14:54:51.626148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.424 [2024-07-25 14:54:51.626164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.424 qpair failed and we were unable to recover it. 00:27:31.424 [2024-07-25 14:54:51.635990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.424 [2024-07-25 14:54:51.636141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.424 [2024-07-25 14:54:51.636161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.424 [2024-07-25 14:54:51.636169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.424 [2024-07-25 14:54:51.636176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.425 [2024-07-25 14:54:51.636193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.425 qpair failed and we were unable to recover it. 
00:27:31.425 [2024-07-25 14:54:51.646032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.425 [2024-07-25 14:54:51.646191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.425 [2024-07-25 14:54:51.646211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.425 [2024-07-25 14:54:51.646218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.425 [2024-07-25 14:54:51.646224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.425 [2024-07-25 14:54:51.646241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.425 qpair failed and we were unable to recover it. 00:27:31.425 [2024-07-25 14:54:51.656085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.425 [2024-07-25 14:54:51.656251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.425 [2024-07-25 14:54:51.656270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.425 [2024-07-25 14:54:51.656276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.425 [2024-07-25 14:54:51.656283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.425 [2024-07-25 14:54:51.656299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.425 qpair failed and we were unable to recover it. 00:27:31.425 [2024-07-25 14:54:51.666087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.425 [2024-07-25 14:54:51.666228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.425 [2024-07-25 14:54:51.666246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.425 [2024-07-25 14:54:51.666253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.425 [2024-07-25 14:54:51.666259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.425 [2024-07-25 14:54:51.666275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.425 qpair failed and we were unable to recover it. 
00:27:31.425 [2024-07-25 14:54:51.676119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.425 [2024-07-25 14:54:51.676286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.425 [2024-07-25 14:54:51.676305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.425 [2024-07-25 14:54:51.676312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.425 [2024-07-25 14:54:51.676318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.425 [2024-07-25 14:54:51.676334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.425 qpair failed and we were unable to recover it. 00:27:31.425 [2024-07-25 14:54:51.686140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.425 [2024-07-25 14:54:51.686287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.425 [2024-07-25 14:54:51.686305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.425 [2024-07-25 14:54:51.686312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.425 [2024-07-25 14:54:51.686321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.425 [2024-07-25 14:54:51.686338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.425 qpair failed and we were unable to recover it. 00:27:31.425 [2024-07-25 14:54:51.696176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.425 [2024-07-25 14:54:51.696322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.425 [2024-07-25 14:54:51.696341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.425 [2024-07-25 14:54:51.696347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.425 [2024-07-25 14:54:51.696353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.425 [2024-07-25 14:54:51.696370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.425 qpair failed and we were unable to recover it. 
00:27:31.425 [2024-07-25 14:54:51.706135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.425 [2024-07-25 14:54:51.706287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.425 [2024-07-25 14:54:51.706306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.425 [2024-07-25 14:54:51.706313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.425 [2024-07-25 14:54:51.706319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.425 [2024-07-25 14:54:51.706337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.425 qpair failed and we were unable to recover it. 00:27:31.686 [2024-07-25 14:54:51.716240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.686 [2024-07-25 14:54:51.716387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.686 [2024-07-25 14:54:51.716406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.686 [2024-07-25 14:54:51.716413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.686 [2024-07-25 14:54:51.716420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.686 [2024-07-25 14:54:51.716437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.686 qpair failed and we were unable to recover it. 00:27:31.686 [2024-07-25 14:54:51.726267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.686 [2024-07-25 14:54:51.726413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.686 [2024-07-25 14:54:51.726430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.686 [2024-07-25 14:54:51.726437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.686 [2024-07-25 14:54:51.726443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.686 [2024-07-25 14:54:51.726460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.686 qpair failed and we were unable to recover it. 
00:27:31.686 [2024-07-25 14:54:51.736294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.686 [2024-07-25 14:54:51.736449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.686 [2024-07-25 14:54:51.736467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.686 [2024-07-25 14:54:51.736475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.686 [2024-07-25 14:54:51.736481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.686 [2024-07-25 14:54:51.736497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.686 qpair failed and we were unable to recover it. 00:27:31.686 [2024-07-25 14:54:51.746318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.686 [2024-07-25 14:54:51.746464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.686 [2024-07-25 14:54:51.746482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.686 [2024-07-25 14:54:51.746489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.686 [2024-07-25 14:54:51.746495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.686 [2024-07-25 14:54:51.746511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.686 qpair failed and we were unable to recover it. 00:27:31.686 [2024-07-25 14:54:51.756340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.686 [2024-07-25 14:54:51.756482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.686 [2024-07-25 14:54:51.756501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.686 [2024-07-25 14:54:51.756507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.686 [2024-07-25 14:54:51.756514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.686 [2024-07-25 14:54:51.756531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.686 qpair failed and we were unable to recover it. 
00:27:31.686 [2024-07-25 14:54:51.766377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.686 [2024-07-25 14:54:51.766520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.686 [2024-07-25 14:54:51.766538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.686 [2024-07-25 14:54:51.766545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.686 [2024-07-25 14:54:51.766551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.687 [2024-07-25 14:54:51.766568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.687 qpair failed and we were unable to recover it. 00:27:31.687 [2024-07-25 14:54:51.776401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.687 [2024-07-25 14:54:51.776567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.687 [2024-07-25 14:54:51.776586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.687 [2024-07-25 14:54:51.776596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.687 [2024-07-25 14:54:51.776602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.687 [2024-07-25 14:54:51.776618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.687 qpair failed and we were unable to recover it. 00:27:31.687 [2024-07-25 14:54:51.786416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.687 [2024-07-25 14:54:51.786563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.687 [2024-07-25 14:54:51.786581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.687 [2024-07-25 14:54:51.786588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.687 [2024-07-25 14:54:51.786594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.687 [2024-07-25 14:54:51.786610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.687 qpair failed and we were unable to recover it. 
00:27:31.687 [2024-07-25 14:54:51.796435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.687 [2024-07-25 14:54:51.796590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.687 [2024-07-25 14:54:51.796608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.687 [2024-07-25 14:54:51.796615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.687 [2024-07-25 14:54:51.796621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.687 [2024-07-25 14:54:51.796637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.687 qpair failed and we were unable to recover it. 00:27:31.687 [2024-07-25 14:54:51.806495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.687 [2024-07-25 14:54:51.806642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.687 [2024-07-25 14:54:51.806660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.687 [2024-07-25 14:54:51.806667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.687 [2024-07-25 14:54:51.806673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.687 [2024-07-25 14:54:51.806690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.687 qpair failed and we were unable to recover it. 00:27:31.687 [2024-07-25 14:54:51.816515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.687 [2024-07-25 14:54:51.816666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.687 [2024-07-25 14:54:51.816685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.687 [2024-07-25 14:54:51.816692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.687 [2024-07-25 14:54:51.816698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.687 [2024-07-25 14:54:51.816715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.687 qpair failed and we were unable to recover it. 
00:27:31.687 [2024-07-25 14:54:51.826547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.687 [2024-07-25 14:54:51.826695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.687 [2024-07-25 14:54:51.826713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.687 [2024-07-25 14:54:51.826720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.687 [2024-07-25 14:54:51.826726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.687 [2024-07-25 14:54:51.826743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.687 qpair failed and we were unable to recover it. 00:27:31.687 [2024-07-25 14:54:51.836574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.687 [2024-07-25 14:54:51.836723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.687 [2024-07-25 14:54:51.836741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.687 [2024-07-25 14:54:51.836748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.687 [2024-07-25 14:54:51.836754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.687 [2024-07-25 14:54:51.836770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.687 qpair failed and we were unable to recover it. 00:27:31.687 [2024-07-25 14:54:51.846615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.687 [2024-07-25 14:54:51.846764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.687 [2024-07-25 14:54:51.846782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.687 [2024-07-25 14:54:51.846789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.687 [2024-07-25 14:54:51.846795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.687 [2024-07-25 14:54:51.846812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.687 qpair failed and we were unable to recover it. 
00:27:31.687 [2024-07-25 14:54:51.856627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.687 [2024-07-25 14:54:51.856777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.687 [2024-07-25 14:54:51.856796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.687 [2024-07-25 14:54:51.856803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.687 [2024-07-25 14:54:51.856809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.687 [2024-07-25 14:54:51.856824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.687 qpair failed and we were unable to recover it. 00:27:31.687 [2024-07-25 14:54:51.866651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.687 [2024-07-25 14:54:51.866796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.687 [2024-07-25 14:54:51.866814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.687 [2024-07-25 14:54:51.866825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.687 [2024-07-25 14:54:51.866831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.687 [2024-07-25 14:54:51.866848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.687 qpair failed and we were unable to recover it. 00:27:31.687 [2024-07-25 14:54:51.876683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.687 [2024-07-25 14:54:51.876869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.687 [2024-07-25 14:54:51.876888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.687 [2024-07-25 14:54:51.876895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.687 [2024-07-25 14:54:51.876902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.687 [2024-07-25 14:54:51.876918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.687 qpair failed and we were unable to recover it. 
00:27:31.687 [2024-07-25 14:54:51.886697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.687 [2024-07-25 14:54:51.886850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.687 [2024-07-25 14:54:51.886868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.687 [2024-07-25 14:54:51.886875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.687 [2024-07-25 14:54:51.886882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.687 [2024-07-25 14:54:51.886898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.687 qpair failed and we were unable to recover it. 00:27:31.687 [2024-07-25 14:54:51.896726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.687 [2024-07-25 14:54:51.896872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.687 [2024-07-25 14:54:51.896891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.687 [2024-07-25 14:54:51.896898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.687 [2024-07-25 14:54:51.896904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.687 [2024-07-25 14:54:51.896922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.687 qpair failed and we were unable to recover it. 00:27:31.687 [2024-07-25 14:54:51.906784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.688 [2024-07-25 14:54:51.906928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.688 [2024-07-25 14:54:51.906946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.688 [2024-07-25 14:54:51.906953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.688 [2024-07-25 14:54:51.906959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.688 [2024-07-25 14:54:51.906976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.688 qpair failed and we were unable to recover it. 
00:27:31.688 [2024-07-25 14:54:51.916787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.688 [2024-07-25 14:54:51.916932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.688 [2024-07-25 14:54:51.916950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.688 [2024-07-25 14:54:51.916958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.688 [2024-07-25 14:54:51.916964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.688 [2024-07-25 14:54:51.916980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.688 qpair failed and we were unable to recover it. 00:27:31.688 [2024-07-25 14:54:51.926835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.688 [2024-07-25 14:54:51.927214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.688 [2024-07-25 14:54:51.927232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.688 [2024-07-25 14:54:51.927239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.688 [2024-07-25 14:54:51.927245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.688 [2024-07-25 14:54:51.927262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.688 qpair failed and we were unable to recover it. 00:27:31.688 [2024-07-25 14:54:51.936830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.688 [2024-07-25 14:54:51.936979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.688 [2024-07-25 14:54:51.936997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.688 [2024-07-25 14:54:51.937004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.688 [2024-07-25 14:54:51.937010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.688 [2024-07-25 14:54:51.937027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.688 qpair failed and we were unable to recover it. 
00:27:31.688 [2024-07-25 14:54:51.946881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.688 [2024-07-25 14:54:51.947026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.688 [2024-07-25 14:54:51.947052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.688 [2024-07-25 14:54:51.947060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.688 [2024-07-25 14:54:51.947066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.688 [2024-07-25 14:54:51.947082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.688 qpair failed and we were unable to recover it. 00:27:31.688 [2024-07-25 14:54:51.956909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.688 [2024-07-25 14:54:51.957056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.688 [2024-07-25 14:54:51.957075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.688 [2024-07-25 14:54:51.957086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.688 [2024-07-25 14:54:51.957092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.688 [2024-07-25 14:54:51.957109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.688 qpair failed and we were unable to recover it. 00:27:31.688 [2024-07-25 14:54:51.966876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.688 [2024-07-25 14:54:51.967026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.688 [2024-07-25 14:54:51.967050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.688 [2024-07-25 14:54:51.967057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.688 [2024-07-25 14:54:51.967063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.688 [2024-07-25 14:54:51.967079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.688 qpair failed and we were unable to recover it. 
00:27:31.688 [2024-07-25 14:54:51.976950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.688 [2024-07-25 14:54:51.977110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.688 [2024-07-25 14:54:51.977128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.688 [2024-07-25 14:54:51.977135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.688 [2024-07-25 14:54:51.977141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.688 [2024-07-25 14:54:51.977158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.688 qpair failed and we were unable to recover it. 00:27:31.949 [2024-07-25 14:54:51.987015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.949 [2024-07-25 14:54:51.987370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.949 [2024-07-25 14:54:51.987388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.949 [2024-07-25 14:54:51.987395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.949 [2024-07-25 14:54:51.987402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.949 [2024-07-25 14:54:51.987417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.949 qpair failed and we were unable to recover it. 00:27:31.949 [2024-07-25 14:54:51.997027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.949 [2024-07-25 14:54:51.997196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.949 [2024-07-25 14:54:51.997215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.949 [2024-07-25 14:54:51.997222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.949 [2024-07-25 14:54:51.997228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.949 [2024-07-25 14:54:51.997245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.949 qpair failed and we were unable to recover it. 
00:27:31.949 [2024-07-25 14:54:52.007059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.949 [2024-07-25 14:54:52.007204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.949 [2024-07-25 14:54:52.007222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.949 [2024-07-25 14:54:52.007229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.949 [2024-07-25 14:54:52.007235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.949 [2024-07-25 14:54:52.007252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.949 qpair failed and we were unable to recover it. 00:27:31.949 [2024-07-25 14:54:52.017019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.949 [2024-07-25 14:54:52.017174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.949 [2024-07-25 14:54:52.017193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.949 [2024-07-25 14:54:52.017200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.949 [2024-07-25 14:54:52.017206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.949 [2024-07-25 14:54:52.017222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.949 qpair failed and we were unable to recover it. 00:27:31.949 [2024-07-25 14:54:52.027111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.949 [2024-07-25 14:54:52.027257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.950 [2024-07-25 14:54:52.027275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.950 [2024-07-25 14:54:52.027282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.950 [2024-07-25 14:54:52.027288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.950 [2024-07-25 14:54:52.027305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.950 qpair failed and we were unable to recover it. 
00:27:31.950 [2024-07-25 14:54:52.037134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.950 [2024-07-25 14:54:52.037282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.950 [2024-07-25 14:54:52.037300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.950 [2024-07-25 14:54:52.037307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.950 [2024-07-25 14:54:52.037313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.950 [2024-07-25 14:54:52.037330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.950 qpair failed and we were unable to recover it. 00:27:31.950 [2024-07-25 14:54:52.047131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.950 [2024-07-25 14:54:52.047278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.950 [2024-07-25 14:54:52.047300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.950 [2024-07-25 14:54:52.047307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.950 [2024-07-25 14:54:52.047313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.950 [2024-07-25 14:54:52.047330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.950 qpair failed and we were unable to recover it. 00:27:31.950 [2024-07-25 14:54:52.057203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.950 [2024-07-25 14:54:52.057348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.950 [2024-07-25 14:54:52.057366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.950 [2024-07-25 14:54:52.057373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.950 [2024-07-25 14:54:52.057379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.950 [2024-07-25 14:54:52.057396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.950 qpair failed and we were unable to recover it. 
00:27:31.950 [2024-07-25 14:54:52.067226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.950 [2024-07-25 14:54:52.067374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.950 [2024-07-25 14:54:52.067393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.950 [2024-07-25 14:54:52.067400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.950 [2024-07-25 14:54:52.067406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.950 [2024-07-25 14:54:52.067423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.950 qpair failed and we were unable to recover it. 00:27:31.950 [2024-07-25 14:54:52.077194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.950 [2024-07-25 14:54:52.077342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.950 [2024-07-25 14:54:52.077361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.950 [2024-07-25 14:54:52.077368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.950 [2024-07-25 14:54:52.077373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.950 [2024-07-25 14:54:52.077390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.950 qpair failed and we were unable to recover it. 00:27:31.950 [2024-07-25 14:54:52.087298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.950 [2024-07-25 14:54:52.087446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.950 [2024-07-25 14:54:52.087464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.950 [2024-07-25 14:54:52.087471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.950 [2024-07-25 14:54:52.087477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.950 [2024-07-25 14:54:52.087493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.950 qpair failed and we were unable to recover it. 
00:27:31.950 [2024-07-25 14:54:52.097323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.950 [2024-07-25 14:54:52.097474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.950 [2024-07-25 14:54:52.097493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.950 [2024-07-25 14:54:52.097500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.950 [2024-07-25 14:54:52.097505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.950 [2024-07-25 14:54:52.097522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.950 qpair failed and we were unable to recover it. 00:27:31.950 [2024-07-25 14:54:52.107337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.950 [2024-07-25 14:54:52.107481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.950 [2024-07-25 14:54:52.107500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.950 [2024-07-25 14:54:52.107507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.950 [2024-07-25 14:54:52.107514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.950 [2024-07-25 14:54:52.107531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.950 qpair failed and we were unable to recover it. 00:27:31.950 [2024-07-25 14:54:52.117312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.950 [2024-07-25 14:54:52.117458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.950 [2024-07-25 14:54:52.117477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.950 [2024-07-25 14:54:52.117484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.950 [2024-07-25 14:54:52.117490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.950 [2024-07-25 14:54:52.117508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.950 qpair failed and we were unable to recover it. 
00:27:31.950 [2024-07-25 14:54:52.127601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.950 [2024-07-25 14:54:52.127750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.950 [2024-07-25 14:54:52.127769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.950 [2024-07-25 14:54:52.127775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.950 [2024-07-25 14:54:52.127782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.950 [2024-07-25 14:54:52.127799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.950 qpair failed and we were unable to recover it. 00:27:31.950 [2024-07-25 14:54:52.137620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.950 [2024-07-25 14:54:52.137769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.950 [2024-07-25 14:54:52.137792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.950 [2024-07-25 14:54:52.137799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.950 [2024-07-25 14:54:52.137805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.950 [2024-07-25 14:54:52.137821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.950 qpair failed and we were unable to recover it. 00:27:31.950 [2024-07-25 14:54:52.147458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.950 [2024-07-25 14:54:52.147604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.950 [2024-07-25 14:54:52.147623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.950 [2024-07-25 14:54:52.147630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.950 [2024-07-25 14:54:52.147635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.950 [2024-07-25 14:54:52.147652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.950 qpair failed and we were unable to recover it. 
00:27:31.950 [2024-07-25 14:54:52.157411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.950 [2024-07-25 14:54:52.157558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.950 [2024-07-25 14:54:52.157577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.951 [2024-07-25 14:54:52.157585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.951 [2024-07-25 14:54:52.157591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.951 [2024-07-25 14:54:52.157608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.951 qpair failed and we were unable to recover it. 00:27:31.951 [2024-07-25 14:54:52.167778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.951 [2024-07-25 14:54:52.167927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.951 [2024-07-25 14:54:52.167945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.951 [2024-07-25 14:54:52.167952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.951 [2024-07-25 14:54:52.167958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.951 [2024-07-25 14:54:52.167975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.951 qpair failed and we were unable to recover it. 00:27:31.951 [2024-07-25 14:54:52.177483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.951 [2024-07-25 14:54:52.177635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.951 [2024-07-25 14:54:52.177654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.951 [2024-07-25 14:54:52.177661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.951 [2024-07-25 14:54:52.177667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.951 [2024-07-25 14:54:52.177687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.951 qpair failed and we were unable to recover it. 
00:27:31.951 [2024-07-25 14:54:52.187580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.951 [2024-07-25 14:54:52.187725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.951 [2024-07-25 14:54:52.187743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.951 [2024-07-25 14:54:52.187750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.951 [2024-07-25 14:54:52.187756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.951 [2024-07-25 14:54:52.187773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.951 qpair failed and we were unable to recover it. 00:27:31.951 [2024-07-25 14:54:52.197605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.951 [2024-07-25 14:54:52.197749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.951 [2024-07-25 14:54:52.197768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.951 [2024-07-25 14:54:52.197775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.951 [2024-07-25 14:54:52.197781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.951 [2024-07-25 14:54:52.197798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.951 qpair failed and we were unable to recover it. 00:27:31.951 [2024-07-25 14:54:52.207663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.951 [2024-07-25 14:54:52.207821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.951 [2024-07-25 14:54:52.207839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.951 [2024-07-25 14:54:52.207846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.951 [2024-07-25 14:54:52.207852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.951 [2024-07-25 14:54:52.207869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.951 qpair failed and we were unable to recover it. 
00:27:31.951 [2024-07-25 14:54:52.217672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.951 [2024-07-25 14:54:52.217822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.951 [2024-07-25 14:54:52.217841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.951 [2024-07-25 14:54:52.217848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.951 [2024-07-25 14:54:52.217854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.951 [2024-07-25 14:54:52.217871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.951 qpair failed and we were unable to recover it. 00:27:31.951 [2024-07-25 14:54:52.227675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.951 [2024-07-25 14:54:52.227818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.951 [2024-07-25 14:54:52.227839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.951 [2024-07-25 14:54:52.227846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.951 [2024-07-25 14:54:52.227852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.951 [2024-07-25 14:54:52.227868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.951 qpair failed and we were unable to recover it. 00:27:31.951 [2024-07-25 14:54:52.237734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.951 [2024-07-25 14:54:52.237878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.951 [2024-07-25 14:54:52.237896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.951 [2024-07-25 14:54:52.237903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.951 [2024-07-25 14:54:52.237910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:31.951 [2024-07-25 14:54:52.237926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.951 qpair failed and we were unable to recover it. 
00:27:32.213 [2024-07-25 14:54:52.247735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.213 [2024-07-25 14:54:52.247880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.213 [2024-07-25 14:54:52.247900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.213 [2024-07-25 14:54:52.247907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.213 [2024-07-25 14:54:52.247913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.213 [2024-07-25 14:54:52.247930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.213 qpair failed and we were unable to recover it. 00:27:32.213 [2024-07-25 14:54:52.257781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.213 [2024-07-25 14:54:52.257928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.213 [2024-07-25 14:54:52.257947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.213 [2024-07-25 14:54:52.257953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.213 [2024-07-25 14:54:52.257959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.213 [2024-07-25 14:54:52.257976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.213 qpair failed and we were unable to recover it. 00:27:32.213 [2024-07-25 14:54:52.267836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.213 [2024-07-25 14:54:52.267978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.213 [2024-07-25 14:54:52.267997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.213 [2024-07-25 14:54:52.268004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.213 [2024-07-25 14:54:52.268010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.213 [2024-07-25 14:54:52.268034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.213 qpair failed and we were unable to recover it. 
00:27:32.213 [2024-07-25 14:54:52.277835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.213 [2024-07-25 14:54:52.277979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.213 [2024-07-25 14:54:52.277998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.213 [2024-07-25 14:54:52.278005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.213 [2024-07-25 14:54:52.278011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.213 [2024-07-25 14:54:52.278028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.213 qpair failed and we were unable to recover it. 00:27:32.213 [2024-07-25 14:54:52.287871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.213 [2024-07-25 14:54:52.288017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.213 [2024-07-25 14:54:52.288035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.213 [2024-07-25 14:54:52.288082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.213 [2024-07-25 14:54:52.288089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.213 [2024-07-25 14:54:52.288106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.213 qpair failed and we were unable to recover it. 00:27:32.213 [2024-07-25 14:54:52.297823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.213 [2024-07-25 14:54:52.297979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.213 [2024-07-25 14:54:52.297999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.213 [2024-07-25 14:54:52.298007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.213 [2024-07-25 14:54:52.298013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.213 [2024-07-25 14:54:52.298031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.213 qpair failed and we were unable to recover it. 
00:27:32.213 [2024-07-25 14:54:52.307919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.213 [2024-07-25 14:54:52.308067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.213 [2024-07-25 14:54:52.308086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.213 [2024-07-25 14:54:52.308093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.213 [2024-07-25 14:54:52.308099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.213 [2024-07-25 14:54:52.308116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.213 qpair failed and we were unable to recover it. 00:27:32.213 [2024-07-25 14:54:52.317979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.213 [2024-07-25 14:54:52.318339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.213 [2024-07-25 14:54:52.318361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.213 [2024-07-25 14:54:52.318368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.213 [2024-07-25 14:54:52.318374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.213 [2024-07-25 14:54:52.318390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.213 qpair failed and we were unable to recover it. 00:27:32.213 [2024-07-25 14:54:52.327926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.213 [2024-07-25 14:54:52.328104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.213 [2024-07-25 14:54:52.328123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.213 [2024-07-25 14:54:52.328130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.214 [2024-07-25 14:54:52.328136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.214 [2024-07-25 14:54:52.328154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.214 qpair failed and we were unable to recover it. 
00:27:32.214 [2024-07-25 14:54:52.338009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.214 [2024-07-25 14:54:52.338388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.214 [2024-07-25 14:54:52.338406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.214 [2024-07-25 14:54:52.338413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.214 [2024-07-25 14:54:52.338419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.214 [2024-07-25 14:54:52.338435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.214 qpair failed and we were unable to recover it. 00:27:32.214 [2024-07-25 14:54:52.348039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.214 [2024-07-25 14:54:52.348186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.214 [2024-07-25 14:54:52.348205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.214 [2024-07-25 14:54:52.348212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.214 [2024-07-25 14:54:52.348218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.214 [2024-07-25 14:54:52.348235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.214 qpair failed and we were unable to recover it. 00:27:32.214 [2024-07-25 14:54:52.358081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.214 [2024-07-25 14:54:52.358226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.214 [2024-07-25 14:54:52.358244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.214 [2024-07-25 14:54:52.358251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.214 [2024-07-25 14:54:52.358257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.214 [2024-07-25 14:54:52.358278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.214 qpair failed and we were unable to recover it. 
00:27:32.214 [2024-07-25 14:54:52.368111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.214 [2024-07-25 14:54:52.368276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.214 [2024-07-25 14:54:52.368294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.214 [2024-07-25 14:54:52.368301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.214 [2024-07-25 14:54:52.368307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.214 [2024-07-25 14:54:52.368324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.214 qpair failed and we were unable to recover it. 00:27:32.214 [2024-07-25 14:54:52.378121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.214 [2024-07-25 14:54:52.378264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.214 [2024-07-25 14:54:52.378283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.214 [2024-07-25 14:54:52.378290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.214 [2024-07-25 14:54:52.378296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.214 [2024-07-25 14:54:52.378313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.214 qpair failed and we were unable to recover it. 00:27:32.214 [2024-07-25 14:54:52.388195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.214 [2024-07-25 14:54:52.388369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.214 [2024-07-25 14:54:52.388388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.214 [2024-07-25 14:54:52.388395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.214 [2024-07-25 14:54:52.388401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.214 [2024-07-25 14:54:52.388418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.214 qpair failed and we were unable to recover it. 
00:27:32.214 [2024-07-25 14:54:52.398208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.214 [2024-07-25 14:54:52.398349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.214 [2024-07-25 14:54:52.398368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.214 [2024-07-25 14:54:52.398374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.214 [2024-07-25 14:54:52.398381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.214 [2024-07-25 14:54:52.398397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.214 qpair failed and we were unable to recover it. 00:27:32.214 [2024-07-25 14:54:52.408210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.214 [2024-07-25 14:54:52.408357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.214 [2024-07-25 14:54:52.408379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.214 [2024-07-25 14:54:52.408386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.214 [2024-07-25 14:54:52.408392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.214 [2024-07-25 14:54:52.408408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.214 qpair failed and we were unable to recover it. 00:27:32.214 [2024-07-25 14:54:52.418182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.214 [2024-07-25 14:54:52.418333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.214 [2024-07-25 14:54:52.418352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.214 [2024-07-25 14:54:52.418359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.214 [2024-07-25 14:54:52.418365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.214 [2024-07-25 14:54:52.418382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.214 qpair failed and we were unable to recover it. 
00:27:32.214 [2024-07-25 14:54:52.428209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.214 [2024-07-25 14:54:52.428358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.214 [2024-07-25 14:54:52.428376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.214 [2024-07-25 14:54:52.428383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.214 [2024-07-25 14:54:52.428389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.214 [2024-07-25 14:54:52.428406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.214 qpair failed and we were unable to recover it. 00:27:32.214 [2024-07-25 14:54:52.438233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.214 [2024-07-25 14:54:52.438381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.214 [2024-07-25 14:54:52.438400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.214 [2024-07-25 14:54:52.438407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.214 [2024-07-25 14:54:52.438412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.214 [2024-07-25 14:54:52.438429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.214 qpair failed and we were unable to recover it. 00:27:32.214 [2024-07-25 14:54:52.448275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.214 [2024-07-25 14:54:52.448420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.214 [2024-07-25 14:54:52.448438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.214 [2024-07-25 14:54:52.448445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.214 [2024-07-25 14:54:52.448454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.214 [2024-07-25 14:54:52.448471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.214 qpair failed and we were unable to recover it. 
00:27:32.214 [2024-07-25 14:54:52.458304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.214 [2024-07-25 14:54:52.458455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.214 [2024-07-25 14:54:52.458474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.214 [2024-07-25 14:54:52.458481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.214 [2024-07-25 14:54:52.458487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.214 [2024-07-25 14:54:52.458504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.214 qpair failed and we were unable to recover it. 00:27:32.215 [2024-07-25 14:54:52.468379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.215 [2024-07-25 14:54:52.468522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.215 [2024-07-25 14:54:52.468540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.215 [2024-07-25 14:54:52.468547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.215 [2024-07-25 14:54:52.468553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.215 [2024-07-25 14:54:52.468570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.215 qpair failed and we were unable to recover it. 00:27:32.215 [2024-07-25 14:54:52.478344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.215 [2024-07-25 14:54:52.478488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.215 [2024-07-25 14:54:52.478506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.215 [2024-07-25 14:54:52.478513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.215 [2024-07-25 14:54:52.478520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.215 [2024-07-25 14:54:52.478536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.215 qpair failed and we were unable to recover it. 
00:27:32.215 [2024-07-25 14:54:52.488390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.215 [2024-07-25 14:54:52.488540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.215 [2024-07-25 14:54:52.488559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.215 [2024-07-25 14:54:52.488568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.215 [2024-07-25 14:54:52.488577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.215 [2024-07-25 14:54:52.488594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.215 qpair failed and we were unable to recover it. 00:27:32.215 [2024-07-25 14:54:52.498450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.215 [2024-07-25 14:54:52.498605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.215 [2024-07-25 14:54:52.498623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.215 [2024-07-25 14:54:52.498630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.215 [2024-07-25 14:54:52.498636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.215 [2024-07-25 14:54:52.498653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.215 qpair failed and we were unable to recover it. 00:27:32.477 [2024-07-25 14:54:52.508441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.477 [2024-07-25 14:54:52.508592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.477 [2024-07-25 14:54:52.508611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.477 [2024-07-25 14:54:52.508618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.477 [2024-07-25 14:54:52.508624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.477 [2024-07-25 14:54:52.508640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.477 qpair failed and we were unable to recover it. 
00:27:32.477 [2024-07-25 14:54:52.518462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.477 [2024-07-25 14:54:52.518600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.477 [2024-07-25 14:54:52.518619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.477 [2024-07-25 14:54:52.518626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.477 [2024-07-25 14:54:52.518631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.477 [2024-07-25 14:54:52.518648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.477 qpair failed and we were unable to recover it. 00:27:32.477 [2024-07-25 14:54:52.528564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.477 [2024-07-25 14:54:52.528711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.477 [2024-07-25 14:54:52.528729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.477 [2024-07-25 14:54:52.528736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.477 [2024-07-25 14:54:52.528742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.477 [2024-07-25 14:54:52.528758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.477 qpair failed and we were unable to recover it. 00:27:32.477 [2024-07-25 14:54:52.538512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.477 [2024-07-25 14:54:52.538667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.477 [2024-07-25 14:54:52.538686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.477 [2024-07-25 14:54:52.538693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.477 [2024-07-25 14:54:52.538702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.477 [2024-07-25 14:54:52.538719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.477 qpair failed and we were unable to recover it. 
00:27:32.477 [2024-07-25 14:54:52.548587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.477 [2024-07-25 14:54:52.548732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.477 [2024-07-25 14:54:52.548750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.477 [2024-07-25 14:54:52.548758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.477 [2024-07-25 14:54:52.548764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.477 [2024-07-25 14:54:52.548781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.477 qpair failed and we were unable to recover it. 00:27:32.477 [2024-07-25 14:54:52.558625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.477 [2024-07-25 14:54:52.558769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.477 [2024-07-25 14:54:52.558788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.477 [2024-07-25 14:54:52.558795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.477 [2024-07-25 14:54:52.558801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.477 [2024-07-25 14:54:52.558818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.477 qpair failed and we were unable to recover it. 00:27:32.477 [2024-07-25 14:54:52.568604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.477 [2024-07-25 14:54:52.568760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.477 [2024-07-25 14:54:52.568779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.477 [2024-07-25 14:54:52.568785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.477 [2024-07-25 14:54:52.568791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.477 [2024-07-25 14:54:52.568808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.477 qpair failed and we were unable to recover it. 
00:27:32.477 [2024-07-25 14:54:52.578702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.477 [2024-07-25 14:54:52.578850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.477 [2024-07-25 14:54:52.578869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.477 [2024-07-25 14:54:52.578876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.477 [2024-07-25 14:54:52.578882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.477 [2024-07-25 14:54:52.578898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.477 qpair failed and we were unable to recover it. 00:27:32.477 [2024-07-25 14:54:52.588732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.477 [2024-07-25 14:54:52.588879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.477 [2024-07-25 14:54:52.588898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.477 [2024-07-25 14:54:52.588905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.477 [2024-07-25 14:54:52.588911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.477 [2024-07-25 14:54:52.588927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.477 qpair failed and we were unable to recover it. 00:27:32.477 [2024-07-25 14:54:52.598749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.477 [2024-07-25 14:54:52.598893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.477 [2024-07-25 14:54:52.598911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.477 [2024-07-25 14:54:52.598918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.477 [2024-07-25 14:54:52.598924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.477 [2024-07-25 14:54:52.598941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.477 qpair failed and we were unable to recover it. 
00:27:32.477 [2024-07-25 14:54:52.608802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.477 [2024-07-25 14:54:52.608950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.477 [2024-07-25 14:54:52.608969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.477 [2024-07-25 14:54:52.608975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.477 [2024-07-25 14:54:52.608981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.477 [2024-07-25 14:54:52.608998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.477 qpair failed and we were unable to recover it. 00:27:32.477 [2024-07-25 14:54:52.618765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.477 [2024-07-25 14:54:52.618919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.477 [2024-07-25 14:54:52.618937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.477 [2024-07-25 14:54:52.618944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.477 [2024-07-25 14:54:52.618950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.477 [2024-07-25 14:54:52.618967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.477 qpair failed and we were unable to recover it. 00:27:32.477 [2024-07-25 14:54:52.628847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.477 [2024-07-25 14:54:52.628997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.477 [2024-07-25 14:54:52.629015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.478 [2024-07-25 14:54:52.629022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.478 [2024-07-25 14:54:52.629032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.478 [2024-07-25 14:54:52.629056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.478 qpair failed and we were unable to recover it. 
00:27:32.478 [2024-07-25 14:54:52.638861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.478 [2024-07-25 14:54:52.639001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.478 [2024-07-25 14:54:52.639019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.478 [2024-07-25 14:54:52.639026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.478 [2024-07-25 14:54:52.639032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.478 [2024-07-25 14:54:52.639055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.478 qpair failed and we were unable to recover it. 00:27:32.478 [2024-07-25 14:54:52.648888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.478 [2024-07-25 14:54:52.649040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.478 [2024-07-25 14:54:52.649065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.478 [2024-07-25 14:54:52.649072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.478 [2024-07-25 14:54:52.649079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.478 [2024-07-25 14:54:52.649095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.478 qpair failed and we were unable to recover it. 00:27:32.478 [2024-07-25 14:54:52.658931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.478 [2024-07-25 14:54:52.659083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.478 [2024-07-25 14:54:52.659102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.478 [2024-07-25 14:54:52.659109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.478 [2024-07-25 14:54:52.659115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.478 [2024-07-25 14:54:52.659132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.478 qpair failed and we were unable to recover it. 
00:27:32.478 [2024-07-25 14:54:52.668953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.478 [2024-07-25 14:54:52.669105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.478 [2024-07-25 14:54:52.669123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.478 [2024-07-25 14:54:52.669130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.478 [2024-07-25 14:54:52.669136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.478 [2024-07-25 14:54:52.669153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.478 qpair failed and we were unable to recover it. 00:27:32.478 [2024-07-25 14:54:52.678992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.478 [2024-07-25 14:54:52.679159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.478 [2024-07-25 14:54:52.679177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.478 [2024-07-25 14:54:52.679184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.478 [2024-07-25 14:54:52.679190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.478 [2024-07-25 14:54:52.679207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.478 qpair failed and we were unable to recover it. 00:27:32.478 [2024-07-25 14:54:52.689025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.478 [2024-07-25 14:54:52.689176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.478 [2024-07-25 14:54:52.689194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.478 [2024-07-25 14:54:52.689201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.478 [2024-07-25 14:54:52.689207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.478 [2024-07-25 14:54:52.689223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.478 qpair failed and we were unable to recover it. 
00:27:32.478 [2024-07-25 14:54:52.699056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.478 [2024-07-25 14:54:52.699207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.478 [2024-07-25 14:54:52.699225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.478 [2024-07-25 14:54:52.699232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.478 [2024-07-25 14:54:52.699238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.478 [2024-07-25 14:54:52.699254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.478 qpair failed and we were unable to recover it. 00:27:32.478 [2024-07-25 14:54:52.709081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.478 [2024-07-25 14:54:52.709233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.478 [2024-07-25 14:54:52.709251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.478 [2024-07-25 14:54:52.709258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.478 [2024-07-25 14:54:52.709264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.478 [2024-07-25 14:54:52.709282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.478 qpair failed and we were unable to recover it. 00:27:32.478 [2024-07-25 14:54:52.719082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.478 [2024-07-25 14:54:52.719227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.478 [2024-07-25 14:54:52.719246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.478 [2024-07-25 14:54:52.719257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.478 [2024-07-25 14:54:52.719264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.478 [2024-07-25 14:54:52.719281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.478 qpair failed and we were unable to recover it. 
00:27:32.478 [2024-07-25 14:54:52.729138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.478 [2024-07-25 14:54:52.729286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.478 [2024-07-25 14:54:52.729305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.478 [2024-07-25 14:54:52.729312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.478 [2024-07-25 14:54:52.729318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.478 [2024-07-25 14:54:52.729335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.478 qpair failed and we were unable to recover it. 00:27:32.478 [2024-07-25 14:54:52.739173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.478 [2024-07-25 14:54:52.739323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.478 [2024-07-25 14:54:52.739342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.478 [2024-07-25 14:54:52.739349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.478 [2024-07-25 14:54:52.739355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.478 [2024-07-25 14:54:52.739371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.478 qpair failed and we were unable to recover it. 00:27:32.478 [2024-07-25 14:54:52.749195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.478 [2024-07-25 14:54:52.749344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.478 [2024-07-25 14:54:52.749363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.478 [2024-07-25 14:54:52.749370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.478 [2024-07-25 14:54:52.749376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.478 [2024-07-25 14:54:52.749393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.478 qpair failed and we were unable to recover it. 
00:27:32.478 [2024-07-25 14:54:52.759221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.478 [2024-07-25 14:54:52.759368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.478 [2024-07-25 14:54:52.759387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.478 [2024-07-25 14:54:52.759393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.478 [2024-07-25 14:54:52.759399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.479 [2024-07-25 14:54:52.759416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.479 qpair failed and we were unable to recover it. 00:27:32.740 [2024-07-25 14:54:52.769260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.740 [2024-07-25 14:54:52.769412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.740 [2024-07-25 14:54:52.769430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.740 [2024-07-25 14:54:52.769437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.740 [2024-07-25 14:54:52.769443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.740 [2024-07-25 14:54:52.769459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.740 qpair failed and we were unable to recover it. 00:27:32.740 [2024-07-25 14:54:52.779278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.740 [2024-07-25 14:54:52.779431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.740 [2024-07-25 14:54:52.779449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.740 [2024-07-25 14:54:52.779456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.740 [2024-07-25 14:54:52.779462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.740 [2024-07-25 14:54:52.779478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.740 qpair failed and we were unable to recover it. 
00:27:32.740 [2024-07-25 14:54:52.789306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.740 [2024-07-25 14:54:52.789452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.740 [2024-07-25 14:54:52.789471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.740 [2024-07-25 14:54:52.789477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.740 [2024-07-25 14:54:52.789483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.740 [2024-07-25 14:54:52.789500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.740 qpair failed and we were unable to recover it. 00:27:32.740 [2024-07-25 14:54:52.799343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.740 [2024-07-25 14:54:52.799485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.740 [2024-07-25 14:54:52.799504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.740 [2024-07-25 14:54:52.799511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.740 [2024-07-25 14:54:52.799517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.740 [2024-07-25 14:54:52.799533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.740 qpair failed and we were unable to recover it. 00:27:32.740 [2024-07-25 14:54:52.809386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.740 [2024-07-25 14:54:52.809531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.740 [2024-07-25 14:54:52.809549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.740 [2024-07-25 14:54:52.809560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.740 [2024-07-25 14:54:52.809566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.740 [2024-07-25 14:54:52.809583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.740 qpair failed and we were unable to recover it. 
00:27:32.740 [2024-07-25 14:54:52.819410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.740 [2024-07-25 14:54:52.819557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.740 [2024-07-25 14:54:52.819575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.740 [2024-07-25 14:54:52.819582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.740 [2024-07-25 14:54:52.819588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.740 [2024-07-25 14:54:52.819604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.740 qpair failed and we were unable to recover it. 00:27:32.740 [2024-07-25 14:54:52.829400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.740 [2024-07-25 14:54:52.829549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.740 [2024-07-25 14:54:52.829567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.740 [2024-07-25 14:54:52.829574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.740 [2024-07-25 14:54:52.829580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.740 [2024-07-25 14:54:52.829597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.740 qpair failed and we were unable to recover it. 00:27:32.740 [2024-07-25 14:54:52.839447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.740 [2024-07-25 14:54:52.839610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.740 [2024-07-25 14:54:52.839628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.740 [2024-07-25 14:54:52.839635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.740 [2024-07-25 14:54:52.839641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.740 [2024-07-25 14:54:52.839657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.740 qpair failed and we were unable to recover it. 
00:27:32.740 [2024-07-25 14:54:52.849519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.740 [2024-07-25 14:54:52.849687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.740 [2024-07-25 14:54:52.849705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.740 [2024-07-25 14:54:52.849712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.740 [2024-07-25 14:54:52.849719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.740 [2024-07-25 14:54:52.849735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.740 qpair failed and we were unable to recover it. 00:27:32.740 [2024-07-25 14:54:52.859435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.740 [2024-07-25 14:54:52.859586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.740 [2024-07-25 14:54:52.859605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.740 [2024-07-25 14:54:52.859612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.740 [2024-07-25 14:54:52.859618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.740 [2024-07-25 14:54:52.859634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.740 qpair failed and we were unable to recover it. 00:27:32.740 [2024-07-25 14:54:52.869542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.741 [2024-07-25 14:54:52.869686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.741 [2024-07-25 14:54:52.869705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.741 [2024-07-25 14:54:52.869712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.741 [2024-07-25 14:54:52.869717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.741 [2024-07-25 14:54:52.869734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.741 qpair failed and we were unable to recover it. 
00:27:32.741 [2024-07-25 14:54:52.879564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.741 [2024-07-25 14:54:52.879706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.741 [2024-07-25 14:54:52.879725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.741 [2024-07-25 14:54:52.879732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.741 [2024-07-25 14:54:52.879738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.741 [2024-07-25 14:54:52.879755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.741 qpair failed and we were unable to recover it. 00:27:32.741 [2024-07-25 14:54:52.889606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.741 [2024-07-25 14:54:52.889748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.741 [2024-07-25 14:54:52.889767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.741 [2024-07-25 14:54:52.889773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.741 [2024-07-25 14:54:52.889779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.741 [2024-07-25 14:54:52.889796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.741 qpair failed and we were unable to recover it. 00:27:32.741 [2024-07-25 14:54:52.899610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.741 [2024-07-25 14:54:52.899751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.741 [2024-07-25 14:54:52.899770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.741 [2024-07-25 14:54:52.899780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.741 [2024-07-25 14:54:52.899786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.741 [2024-07-25 14:54:52.899803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.741 qpair failed and we were unable to recover it. 
00:27:32.741 [2024-07-25 14:54:52.909630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.741 [2024-07-25 14:54:52.909777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.741 [2024-07-25 14:54:52.909795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.741 [2024-07-25 14:54:52.909802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.741 [2024-07-25 14:54:52.909808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.741 [2024-07-25 14:54:52.909825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.741 qpair failed and we were unable to recover it. 00:27:32.741 [2024-07-25 14:54:52.919671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.741 [2024-07-25 14:54:52.919820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.741 [2024-07-25 14:54:52.919839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.741 [2024-07-25 14:54:52.919846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.741 [2024-07-25 14:54:52.919851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.741 [2024-07-25 14:54:52.919868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.741 qpair failed and we were unable to recover it. 00:27:32.741 [2024-07-25 14:54:52.929738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.741 [2024-07-25 14:54:52.929885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.741 [2024-07-25 14:54:52.929904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.741 [2024-07-25 14:54:52.929910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.741 [2024-07-25 14:54:52.929917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.741 [2024-07-25 14:54:52.929933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.741 qpair failed and we were unable to recover it. 
00:27:32.741 [2024-07-25 14:54:52.939742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.741 [2024-07-25 14:54:52.939937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.741 [2024-07-25 14:54:52.939956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.741 [2024-07-25 14:54:52.939963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.741 [2024-07-25 14:54:52.939969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.741 [2024-07-25 14:54:52.939985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.741 qpair failed and we were unable to recover it. 00:27:32.741 [2024-07-25 14:54:52.949728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.741 [2024-07-25 14:54:52.949880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.741 [2024-07-25 14:54:52.949900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.741 [2024-07-25 14:54:52.949906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.741 [2024-07-25 14:54:52.949913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.741 [2024-07-25 14:54:52.949930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.741 qpair failed and we were unable to recover it. 00:27:32.741 [2024-07-25 14:54:52.959791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.741 [2024-07-25 14:54:52.959932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.741 [2024-07-25 14:54:52.959951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.741 [2024-07-25 14:54:52.959958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.741 [2024-07-25 14:54:52.959964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.741 [2024-07-25 14:54:52.959981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.741 qpair failed and we were unable to recover it. 
00:27:32.741 [2024-07-25 14:54:52.969830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.741 [2024-07-25 14:54:52.969977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.741 [2024-07-25 14:54:52.969996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.741 [2024-07-25 14:54:52.970002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.741 [2024-07-25 14:54:52.970009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.741 [2024-07-25 14:54:52.970026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.741 qpair failed and we were unable to recover it. 00:27:32.741 [2024-07-25 14:54:52.979838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.741 [2024-07-25 14:54:52.979984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.741 [2024-07-25 14:54:52.980003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.741 [2024-07-25 14:54:52.980010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.741 [2024-07-25 14:54:52.980017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.741 [2024-07-25 14:54:52.980034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.741 qpair failed and we were unable to recover it. 00:27:32.741 [2024-07-25 14:54:52.989863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.741 [2024-07-25 14:54:52.990011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.741 [2024-07-25 14:54:52.990029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.741 [2024-07-25 14:54:52.990040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.741 [2024-07-25 14:54:52.990052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.741 [2024-07-25 14:54:52.990069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.741 qpair failed and we were unable to recover it. 
00:27:32.741 [2024-07-25 14:54:52.999890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.742 [2024-07-25 14:54:53.000035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.742 [2024-07-25 14:54:53.000060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.742 [2024-07-25 14:54:53.000067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.742 [2024-07-25 14:54:53.000073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.742 [2024-07-25 14:54:53.000090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.742 qpair failed and we were unable to recover it. 00:27:32.742 [2024-07-25 14:54:53.009928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.742 [2024-07-25 14:54:53.010079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.742 [2024-07-25 14:54:53.010097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.742 [2024-07-25 14:54:53.010104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.742 [2024-07-25 14:54:53.010110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.742 [2024-07-25 14:54:53.010127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.742 qpair failed and we were unable to recover it. 00:27:32.742 [2024-07-25 14:54:53.019953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.742 [2024-07-25 14:54:53.020113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.742 [2024-07-25 14:54:53.020131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.742 [2024-07-25 14:54:53.020138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.742 [2024-07-25 14:54:53.020144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.742 [2024-07-25 14:54:53.020161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.742 qpair failed and we were unable to recover it. 
00:27:32.742 [2024-07-25 14:54:53.029966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.742 [2024-07-25 14:54:53.030114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.742 [2024-07-25 14:54:53.030133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.742 [2024-07-25 14:54:53.030140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.742 [2024-07-25 14:54:53.030146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:32.742 [2024-07-25 14:54:53.030162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.742 qpair failed and we were unable to recover it. 00:27:33.003 [2024-07-25 14:54:53.040016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.003 [2024-07-25 14:54:53.040219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.003 [2024-07-25 14:54:53.040238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.003 [2024-07-25 14:54:53.040244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.003 [2024-07-25 14:54:53.040250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.003 [2024-07-25 14:54:53.040266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.003 qpair failed and we were unable to recover it. 00:27:33.003 [2024-07-25 14:54:53.050069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.003 [2024-07-25 14:54:53.050425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.003 [2024-07-25 14:54:53.050443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.003 [2024-07-25 14:54:53.050450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.003 [2024-07-25 14:54:53.050456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.003 [2024-07-25 14:54:53.050472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.003 qpair failed and we were unable to recover it. 
00:27:33.003 [2024-07-25 14:54:53.060085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.003 [2024-07-25 14:54:53.060230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.003 [2024-07-25 14:54:53.060249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.003 [2024-07-25 14:54:53.060256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.003 [2024-07-25 14:54:53.060262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.003 [2024-07-25 14:54:53.060278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.003 qpair failed and we were unable to recover it. 00:27:33.003 [2024-07-25 14:54:53.070123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.003 [2024-07-25 14:54:53.070262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.003 [2024-07-25 14:54:53.070280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.003 [2024-07-25 14:54:53.070287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.003 [2024-07-25 14:54:53.070293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.003 [2024-07-25 14:54:53.070309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.003 qpair failed and we were unable to recover it. 00:27:33.003 [2024-07-25 14:54:53.080135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.003 [2024-07-25 14:54:53.080280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.003 [2024-07-25 14:54:53.080305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.003 [2024-07-25 14:54:53.080312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.003 [2024-07-25 14:54:53.080318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.003 [2024-07-25 14:54:53.080334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.003 qpair failed and we were unable to recover it. 
00:27:33.003 [2024-07-25 14:54:53.090217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.003 [2024-07-25 14:54:53.090381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.003 [2024-07-25 14:54:53.090400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.003 [2024-07-25 14:54:53.090407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.003 [2024-07-25 14:54:53.090413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.003 [2024-07-25 14:54:53.090429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.003 qpair failed and we were unable to recover it. 00:27:33.003 [2024-07-25 14:54:53.100154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.003 [2024-07-25 14:54:53.100303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.004 [2024-07-25 14:54:53.100322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.004 [2024-07-25 14:54:53.100329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.004 [2024-07-25 14:54:53.100335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.004 [2024-07-25 14:54:53.100351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.004 qpair failed and we were unable to recover it. 00:27:33.004 [2024-07-25 14:54:53.110165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.004 [2024-07-25 14:54:53.110309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.004 [2024-07-25 14:54:53.110327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.004 [2024-07-25 14:54:53.110334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.004 [2024-07-25 14:54:53.110340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.004 [2024-07-25 14:54:53.110357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.004 qpair failed and we were unable to recover it. 
00:27:33.004 [2024-07-25 14:54:53.120500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.004 [2024-07-25 14:54:53.120649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.004 [2024-07-25 14:54:53.120669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.004 [2024-07-25 14:54:53.120676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.004 [2024-07-25 14:54:53.120682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.004 [2024-07-25 14:54:53.120702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.004 qpair failed and we were unable to recover it. 00:27:33.004 [2024-07-25 14:54:53.130303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.004 [2024-07-25 14:54:53.130450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.004 [2024-07-25 14:54:53.130469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.004 [2024-07-25 14:54:53.130476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.004 [2024-07-25 14:54:53.130482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.004 [2024-07-25 14:54:53.130498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.004 qpair failed and we were unable to recover it. 00:27:33.004 [2024-07-25 14:54:53.140329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.004 [2024-07-25 14:54:53.140475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.004 [2024-07-25 14:54:53.140493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.004 [2024-07-25 14:54:53.140500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.004 [2024-07-25 14:54:53.140506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.004 [2024-07-25 14:54:53.140523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.004 qpair failed and we were unable to recover it. 
00:27:33.004 [2024-07-25 14:54:53.150348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.004 [2024-07-25 14:54:53.150490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.004 [2024-07-25 14:54:53.150509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.004 [2024-07-25 14:54:53.150516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.004 [2024-07-25 14:54:53.150522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.004 [2024-07-25 14:54:53.150538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.004 qpair failed and we were unable to recover it. 00:27:33.004 [2024-07-25 14:54:53.160383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.004 [2024-07-25 14:54:53.160523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.004 [2024-07-25 14:54:53.160541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.004 [2024-07-25 14:54:53.160548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.004 [2024-07-25 14:54:53.160554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.004 [2024-07-25 14:54:53.160570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.004 qpair failed and we were unable to recover it. 00:27:33.004 [2024-07-25 14:54:53.170443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.004 [2024-07-25 14:54:53.170604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.004 [2024-07-25 14:54:53.170626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.004 [2024-07-25 14:54:53.170633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.004 [2024-07-25 14:54:53.170639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.004 [2024-07-25 14:54:53.170656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.004 qpair failed and we were unable to recover it. 
00:27:33.004 [2024-07-25 14:54:53.180426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.004 [2024-07-25 14:54:53.180566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.004 [2024-07-25 14:54:53.180584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.004 [2024-07-25 14:54:53.180591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.004 [2024-07-25 14:54:53.180597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.004 [2024-07-25 14:54:53.180614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.004 qpair failed and we were unable to recover it. 00:27:33.004 [2024-07-25 14:54:53.190447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.004 [2024-07-25 14:54:53.190591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.004 [2024-07-25 14:54:53.190610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.004 [2024-07-25 14:54:53.190616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.004 [2024-07-25 14:54:53.190622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.004 [2024-07-25 14:54:53.190639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.004 qpair failed and we were unable to recover it. 00:27:33.004 [2024-07-25 14:54:53.200537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.004 [2024-07-25 14:54:53.200700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.004 [2024-07-25 14:54:53.200718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.004 [2024-07-25 14:54:53.200725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.004 [2024-07-25 14:54:53.200731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.004 [2024-07-25 14:54:53.200747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.004 qpair failed and we were unable to recover it. 
00:27:33.004 [2024-07-25 14:54:53.210534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.004 [2024-07-25 14:54:53.210681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.004 [2024-07-25 14:54:53.210700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.004 [2024-07-25 14:54:53.210706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.004 [2024-07-25 14:54:53.210712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.004 [2024-07-25 14:54:53.210732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.004 qpair failed and we were unable to recover it. 00:27:33.004 [2024-07-25 14:54:53.220554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.004 [2024-07-25 14:54:53.220696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.004 [2024-07-25 14:54:53.220715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.004 [2024-07-25 14:54:53.220722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.004 [2024-07-25 14:54:53.220728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.005 [2024-07-25 14:54:53.220744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.005 qpair failed and we were unable to recover it. 00:27:33.005 [2024-07-25 14:54:53.230500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.005 [2024-07-25 14:54:53.230648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.005 [2024-07-25 14:54:53.230666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.005 [2024-07-25 14:54:53.230673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.005 [2024-07-25 14:54:53.230679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.005 [2024-07-25 14:54:53.230696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.005 qpair failed and we were unable to recover it. 
00:27:33.005 [2024-07-25 14:54:53.240609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.005 [2024-07-25 14:54:53.240756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.005 [2024-07-25 14:54:53.240775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.005 [2024-07-25 14:54:53.240782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.005 [2024-07-25 14:54:53.240788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.005 [2024-07-25 14:54:53.240804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.005 qpair failed and we were unable to recover it. 00:27:33.005 [2024-07-25 14:54:53.250646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.005 [2024-07-25 14:54:53.250792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.005 [2024-07-25 14:54:53.250810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.005 [2024-07-25 14:54:53.250817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.005 [2024-07-25 14:54:53.250823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.005 [2024-07-25 14:54:53.250839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.005 qpair failed and we were unable to recover it. 00:27:33.005 [2024-07-25 14:54:53.260569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.005 [2024-07-25 14:54:53.260716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.005 [2024-07-25 14:54:53.260738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.005 [2024-07-25 14:54:53.260745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.005 [2024-07-25 14:54:53.260751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.005 [2024-07-25 14:54:53.260768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.005 qpair failed and we were unable to recover it. 
00:27:33.005 [2024-07-25 14:54:53.270691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.005 [2024-07-25 14:54:53.270838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.005 [2024-07-25 14:54:53.270857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.005 [2024-07-25 14:54:53.270864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.005 [2024-07-25 14:54:53.270870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.005 [2024-07-25 14:54:53.270886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.005 qpair failed and we were unable to recover it. 00:27:33.005 [2024-07-25 14:54:53.280711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.005 [2024-07-25 14:54:53.280859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.005 [2024-07-25 14:54:53.280877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.005 [2024-07-25 14:54:53.280884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.005 [2024-07-25 14:54:53.280890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.005 [2024-07-25 14:54:53.280907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.005 qpair failed and we were unable to recover it. 00:27:33.005 [2024-07-25 14:54:53.290760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.005 [2024-07-25 14:54:53.290909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.005 [2024-07-25 14:54:53.290927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.005 [2024-07-25 14:54:53.290935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.005 [2024-07-25 14:54:53.290940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.005 [2024-07-25 14:54:53.290957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.005 qpair failed and we were unable to recover it. 
00:27:33.264 [2024-07-25 14:54:53.300748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.264 [2024-07-25 14:54:53.300896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.264 [2024-07-25 14:54:53.300915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.264 [2024-07-25 14:54:53.300922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.264 [2024-07-25 14:54:53.300929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.264 [2024-07-25 14:54:53.300950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.264 qpair failed and we were unable to recover it. 00:27:33.264 [2024-07-25 14:54:53.310745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.264 [2024-07-25 14:54:53.310889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.264 [2024-07-25 14:54:53.310908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.264 [2024-07-25 14:54:53.310915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.264 [2024-07-25 14:54:53.310921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.264 [2024-07-25 14:54:53.310937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.264 qpair failed and we were unable to recover it. 00:27:33.264 [2024-07-25 14:54:53.320831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.264 [2024-07-25 14:54:53.320974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.264 [2024-07-25 14:54:53.320992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.264 [2024-07-25 14:54:53.320999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.264 [2024-07-25 14:54:53.321005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.264 [2024-07-25 14:54:53.321022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.264 qpair failed and we were unable to recover it. 
00:27:33.264 [2024-07-25 14:54:53.330859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.264 [2024-07-25 14:54:53.331008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.264 [2024-07-25 14:54:53.331026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.264 [2024-07-25 14:54:53.331033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.264 [2024-07-25 14:54:53.331039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.264 [2024-07-25 14:54:53.331062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.264 qpair failed and we were unable to recover it. 00:27:33.264 [2024-07-25 14:54:53.340882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.264 [2024-07-25 14:54:53.341027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.264 [2024-07-25 14:54:53.341051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.264 [2024-07-25 14:54:53.341059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.264 [2024-07-25 14:54:53.341064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.264 [2024-07-25 14:54:53.341081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.264 qpair failed and we were unable to recover it. 00:27:33.264 [2024-07-25 14:54:53.350906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.264 [2024-07-25 14:54:53.351055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.264 [2024-07-25 14:54:53.351076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.264 [2024-07-25 14:54:53.351083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.264 [2024-07-25 14:54:53.351089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.264 [2024-07-25 14:54:53.351106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.264 qpair failed and we were unable to recover it. 
00:27:33.264 [2024-07-25 14:54:53.360938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.264 [2024-07-25 14:54:53.361086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.264 [2024-07-25 14:54:53.361105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.361112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.361118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.361135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 00:27:33.265 [2024-07-25 14:54:53.370975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.371128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.371147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.371154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.371160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.371177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 00:27:33.265 [2024-07-25 14:54:53.380989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.381145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.381164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.381171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.381178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.381194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 
00:27:33.265 [2024-07-25 14:54:53.391013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.391364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.391382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.391389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.391399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.391415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 00:27:33.265 [2024-07-25 14:54:53.401058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.401205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.401223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.401230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.401236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.401252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 00:27:33.265 [2024-07-25 14:54:53.410999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.411149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.411167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.411174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.411180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.411197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 
00:27:33.265 [2024-07-25 14:54:53.421027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.421176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.421195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.421202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.421208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.421224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 00:27:33.265 [2024-07-25 14:54:53.431148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.431297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.431315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.431322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.431329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.431346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 00:27:33.265 [2024-07-25 14:54:53.441096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.441246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.441265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.441272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.441278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.441295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 
00:27:33.265 [2024-07-25 14:54:53.451207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.451351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.451370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.451377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.451382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.451399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 00:27:33.265 [2024-07-25 14:54:53.461221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.461368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.461387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.461394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.461400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.461416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 00:27:33.265 [2024-07-25 14:54:53.471252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.471396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.471415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.471421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.471427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.471444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 
00:27:33.265 [2024-07-25 14:54:53.481266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.481407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.481426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.481433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.481443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.481460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 00:27:33.265 [2024-07-25 14:54:53.491233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.491393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.491411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.491418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.491424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.491440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 00:27:33.265 [2024-07-25 14:54:53.501336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.501483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.501502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.501509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.501515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.501531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 
00:27:33.265 [2024-07-25 14:54:53.511371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.511517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.511535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.511543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.511548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.511565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 00:27:33.265 [2024-07-25 14:54:53.521401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.265 [2024-07-25 14:54:53.521545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.265 [2024-07-25 14:54:53.521564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.265 [2024-07-25 14:54:53.521571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.265 [2024-07-25 14:54:53.521577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.265 [2024-07-25 14:54:53.521593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.265 qpair failed and we were unable to recover it. 00:27:33.266 [2024-07-25 14:54:53.531441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.266 [2024-07-25 14:54:53.531589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.266 [2024-07-25 14:54:53.531608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.266 [2024-07-25 14:54:53.531615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.266 [2024-07-25 14:54:53.531621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.266 [2024-07-25 14:54:53.531637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.266 qpair failed and we were unable to recover it. 
00:27:33.266 [2024-07-25 14:54:53.541388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.266 [2024-07-25 14:54:53.541533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.266 [2024-07-25 14:54:53.541552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.266 [2024-07-25 14:54:53.541559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.266 [2024-07-25 14:54:53.541565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.266 [2024-07-25 14:54:53.541581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.266 qpair failed and we were unable to recover it. 00:27:33.266 [2024-07-25 14:54:53.551487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.266 [2024-07-25 14:54:53.551634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.266 [2024-07-25 14:54:53.551652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.266 [2024-07-25 14:54:53.551658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.266 [2024-07-25 14:54:53.551664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.266 [2024-07-25 14:54:53.551682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.266 qpair failed and we were unable to recover it. 00:27:33.526 [2024-07-25 14:54:53.561447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.526 [2024-07-25 14:54:53.561604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.526 [2024-07-25 14:54:53.561623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.526 [2024-07-25 14:54:53.561631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.526 [2024-07-25 14:54:53.561638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.526 [2024-07-25 14:54:53.561654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.526 qpair failed and we were unable to recover it. 
00:27:33.526 [2024-07-25 14:54:53.571566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.527 [2024-07-25 14:54:53.571712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.527 [2024-07-25 14:54:53.571730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.527 [2024-07-25 14:54:53.571737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.527 [2024-07-25 14:54:53.571747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.527 [2024-07-25 14:54:53.571764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.527 qpair failed and we were unable to recover it. 00:27:33.527 [2024-07-25 14:54:53.581552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.527 [2024-07-25 14:54:53.581914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.527 [2024-07-25 14:54:53.581933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.527 [2024-07-25 14:54:53.581939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.527 [2024-07-25 14:54:53.581945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.527 [2024-07-25 14:54:53.581961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.527 qpair failed and we were unable to recover it. 00:27:33.527 [2024-07-25 14:54:53.591595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.527 [2024-07-25 14:54:53.591737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.527 [2024-07-25 14:54:53.591756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.527 [2024-07-25 14:54:53.591763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.527 [2024-07-25 14:54:53.591769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.527 [2024-07-25 14:54:53.591786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.527 qpair failed and we were unable to recover it. 
00:27:33.527 [2024-07-25 14:54:53.601615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.527 [2024-07-25 14:54:53.601766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.527 [2024-07-25 14:54:53.601786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.527 [2024-07-25 14:54:53.601794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.527 [2024-07-25 14:54:53.601801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.527 [2024-07-25 14:54:53.601818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.527 qpair failed and we were unable to recover it. 00:27:33.527 [2024-07-25 14:54:53.611612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.527 [2024-07-25 14:54:53.611760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.527 [2024-07-25 14:54:53.611778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.527 [2024-07-25 14:54:53.611785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.527 [2024-07-25 14:54:53.611791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.527 [2024-07-25 14:54:53.611808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.527 qpair failed and we were unable to recover it. 00:27:33.527 [2024-07-25 14:54:53.621624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.527 [2024-07-25 14:54:53.621786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.527 [2024-07-25 14:54:53.621805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.527 [2024-07-25 14:54:53.621812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.527 [2024-07-25 14:54:53.621818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.527 [2024-07-25 14:54:53.621834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.527 qpair failed and we were unable to recover it. 
00:27:33.527 [2024-07-25 14:54:53.631739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.527 [2024-07-25 14:54:53.631882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.527 [2024-07-25 14:54:53.631900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.527 [2024-07-25 14:54:53.631907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.527 [2024-07-25 14:54:53.631913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.527 [2024-07-25 14:54:53.631929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.527 qpair failed and we were unable to recover it. 00:27:33.527 [2024-07-25 14:54:53.641803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.527 [2024-07-25 14:54:53.641954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.527 [2024-07-25 14:54:53.641973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.527 [2024-07-25 14:54:53.641980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.527 [2024-07-25 14:54:53.641986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.527 [2024-07-25 14:54:53.642003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.527 qpair failed and we were unable to recover it. 00:27:33.527 [2024-07-25 14:54:53.651807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.527 [2024-07-25 14:54:53.651953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.527 [2024-07-25 14:54:53.651972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.527 [2024-07-25 14:54:53.651979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.527 [2024-07-25 14:54:53.651985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.527 [2024-07-25 14:54:53.652002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.527 qpair failed and we were unable to recover it. 
00:27:33.527 [2024-07-25 14:54:53.661823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.527 [2024-07-25 14:54:53.661972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.527 [2024-07-25 14:54:53.661991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.527 [2024-07-25 14:54:53.662002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.527 [2024-07-25 14:54:53.662008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.527 [2024-07-25 14:54:53.662025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.527 qpair failed and we were unable to recover it. 00:27:33.527 [2024-07-25 14:54:53.671858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.527 [2024-07-25 14:54:53.672000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.527 [2024-07-25 14:54:53.672018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.527 [2024-07-25 14:54:53.672025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.527 [2024-07-25 14:54:53.672031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.527 [2024-07-25 14:54:53.672056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.527 qpair failed and we were unable to recover it. 00:27:33.527 [2024-07-25 14:54:53.681846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.527 [2024-07-25 14:54:53.681987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.527 [2024-07-25 14:54:53.682005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.527 [2024-07-25 14:54:53.682012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.527 [2024-07-25 14:54:53.682017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.527 [2024-07-25 14:54:53.682034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.527 qpair failed and we were unable to recover it. 
00:27:33.527 [2024-07-25 14:54:53.691924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.527 [2024-07-25 14:54:53.692078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.527 [2024-07-25 14:54:53.692097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.527 [2024-07-25 14:54:53.692104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.527 [2024-07-25 14:54:53.692110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.527 [2024-07-25 14:54:53.692127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.527 qpair failed and we were unable to recover it. 00:27:33.527 [2024-07-25 14:54:53.701856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.527 [2024-07-25 14:54:53.702007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.528 [2024-07-25 14:54:53.702026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.528 [2024-07-25 14:54:53.702033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.528 [2024-07-25 14:54:53.702039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.528 [2024-07-25 14:54:53.702063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.528 qpair failed and we were unable to recover it. 00:27:33.528 [2024-07-25 14:54:53.711969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.528 [2024-07-25 14:54:53.712118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.528 [2024-07-25 14:54:53.712137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.528 [2024-07-25 14:54:53.712143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.528 [2024-07-25 14:54:53.712149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.528 [2024-07-25 14:54:53.712166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.528 qpair failed and we were unable to recover it. 
00:27:33.528 [2024-07-25 14:54:53.721921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.528 [2024-07-25 14:54:53.722083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.528 [2024-07-25 14:54:53.722101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.528 [2024-07-25 14:54:53.722108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.528 [2024-07-25 14:54:53.722114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.528 [2024-07-25 14:54:53.722131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.528 qpair failed and we were unable to recover it. 00:27:33.528 [2024-07-25 14:54:53.731954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.528 [2024-07-25 14:54:53.732110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.528 [2024-07-25 14:54:53.732129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.528 [2024-07-25 14:54:53.732136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.528 [2024-07-25 14:54:53.732141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.528 [2024-07-25 14:54:53.732159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.528 qpair failed and we were unable to recover it. 00:27:33.528 [2024-07-25 14:54:53.742076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.528 [2024-07-25 14:54:53.742223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.528 [2024-07-25 14:54:53.742241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.528 [2024-07-25 14:54:53.742248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.528 [2024-07-25 14:54:53.742254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.528 [2024-07-25 14:54:53.742271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.528 qpair failed and we were unable to recover it. 
00:27:33.528 [2024-07-25 14:54:53.752067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.528 [2024-07-25 14:54:53.752219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.528 [2024-07-25 14:54:53.752237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.528 [2024-07-25 14:54:53.752247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.528 [2024-07-25 14:54:53.752254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.528 [2024-07-25 14:54:53.752270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.528 qpair failed and we were unable to recover it. 00:27:33.528 [2024-07-25 14:54:53.762116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.528 [2024-07-25 14:54:53.762264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.528 [2024-07-25 14:54:53.762284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.528 [2024-07-25 14:54:53.762291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.528 [2024-07-25 14:54:53.762297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.528 [2024-07-25 14:54:53.762313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.528 qpair failed and we were unable to recover it. 00:27:33.528 [2024-07-25 14:54:53.772130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.528 [2024-07-25 14:54:53.772282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.528 [2024-07-25 14:54:53.772300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.528 [2024-07-25 14:54:53.772306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.528 [2024-07-25 14:54:53.772312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.528 [2024-07-25 14:54:53.772329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.528 qpair failed and we were unable to recover it. 
00:27:33.528 [2024-07-25 14:54:53.782178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.528 [2024-07-25 14:54:53.782331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.528 [2024-07-25 14:54:53.782350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.528 [2024-07-25 14:54:53.782357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.528 [2024-07-25 14:54:53.782363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.528 [2024-07-25 14:54:53.782379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.528 qpair failed and we were unable to recover it. 00:27:33.528 [2024-07-25 14:54:53.792170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.528 [2024-07-25 14:54:53.792315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.528 [2024-07-25 14:54:53.792333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.528 [2024-07-25 14:54:53.792340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.528 [2024-07-25 14:54:53.792346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.528 [2024-07-25 14:54:53.792363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.528 qpair failed and we were unable to recover it. 00:27:33.528 [2024-07-25 14:54:53.802234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.528 [2024-07-25 14:54:53.802381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.528 [2024-07-25 14:54:53.802401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.528 [2024-07-25 14:54:53.802407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.528 [2024-07-25 14:54:53.802413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.528 [2024-07-25 14:54:53.802430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.528 qpair failed and we were unable to recover it. 
00:27:33.528 [2024-07-25 14:54:53.812258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.528 [2024-07-25 14:54:53.812406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.528 [2024-07-25 14:54:53.812424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.528 [2024-07-25 14:54:53.812431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.528 [2024-07-25 14:54:53.812437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.528 [2024-07-25 14:54:53.812454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.528 qpair failed and we were unable to recover it. 00:27:33.789 [2024-07-25 14:54:53.822264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.789 [2024-07-25 14:54:53.822426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.789 [2024-07-25 14:54:53.822445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.789 [2024-07-25 14:54:53.822452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.789 [2024-07-25 14:54:53.822458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.789 [2024-07-25 14:54:53.822475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-07-25 14:54:53.832484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.789 [2024-07-25 14:54:53.832631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.789 [2024-07-25 14:54:53.832650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.789 [2024-07-25 14:54:53.832657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.789 [2024-07-25 14:54:53.832662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.789 [2024-07-25 14:54:53.832679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.789 qpair failed and we were unable to recover it. 
00:27:33.789 [2024-07-25 14:54:53.842290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.789 [2024-07-25 14:54:53.842436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.789 [2024-07-25 14:54:53.842454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.789 [2024-07-25 14:54:53.842464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.789 [2024-07-25 14:54:53.842470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.789 [2024-07-25 14:54:53.842487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-07-25 14:54:53.852372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.789 [2024-07-25 14:54:53.852517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.789 [2024-07-25 14:54:53.852535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.789 [2024-07-25 14:54:53.852542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.789 [2024-07-25 14:54:53.852548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.789 [2024-07-25 14:54:53.852565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-07-25 14:54:53.862390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.789 [2024-07-25 14:54:53.862533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.789 [2024-07-25 14:54:53.862552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.789 [2024-07-25 14:54:53.862559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.789 [2024-07-25 14:54:53.862566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.789 [2024-07-25 14:54:53.862583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.789 qpair failed and we were unable to recover it. 
00:27:33.789 [2024-07-25 14:54:53.872361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.789 [2024-07-25 14:54:53.872506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.789 [2024-07-25 14:54:53.872524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.789 [2024-07-25 14:54:53.872531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.789 [2024-07-25 14:54:53.872537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.789 [2024-07-25 14:54:53.872553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-07-25 14:54:53.882448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.789 [2024-07-25 14:54:53.882599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.789 [2024-07-25 14:54:53.882617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.789 [2024-07-25 14:54:53.882624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.789 [2024-07-25 14:54:53.882630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.789 [2024-07-25 14:54:53.882646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.789 qpair failed and we were unable to recover it. 00:27:33.789 [2024-07-25 14:54:53.892427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.789 [2024-07-25 14:54:53.892577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.789 [2024-07-25 14:54:53.892595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.789 [2024-07-25 14:54:53.892602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.790 [2024-07-25 14:54:53.892608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.790 [2024-07-25 14:54:53.892624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.790 qpair failed and we were unable to recover it. 
00:27:33.790 [2024-07-25 14:54:53.902523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.790 [2024-07-25 14:54:53.902665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.790 [2024-07-25 14:54:53.902685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.790 [2024-07-25 14:54:53.902692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.790 [2024-07-25 14:54:53.902697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.790 [2024-07-25 14:54:53.902714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-07-25 14:54:53.912531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.790 [2024-07-25 14:54:53.912691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.790 [2024-07-25 14:54:53.912710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.790 [2024-07-25 14:54:53.912717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.790 [2024-07-25 14:54:53.912723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.790 [2024-07-25 14:54:53.912739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-07-25 14:54:53.922581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.790 [2024-07-25 14:54:53.922752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.790 [2024-07-25 14:54:53.922770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.790 [2024-07-25 14:54:53.922777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.790 [2024-07-25 14:54:53.922783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.790 [2024-07-25 14:54:53.922800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.790 qpair failed and we were unable to recover it. 
00:27:33.790 [2024-07-25 14:54:53.932544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.790 [2024-07-25 14:54:53.932896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.790 [2024-07-25 14:54:53.932914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.790 [2024-07-25 14:54:53.932924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.790 [2024-07-25 14:54:53.932930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.790 [2024-07-25 14:54:53.932947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-07-25 14:54:53.942559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.790 [2024-07-25 14:54:53.942708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.790 [2024-07-25 14:54:53.942727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.790 [2024-07-25 14:54:53.942733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.790 [2024-07-25 14:54:53.942739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.790 [2024-07-25 14:54:53.942756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-07-25 14:54:53.952689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.790 [2024-07-25 14:54:53.952876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.790 [2024-07-25 14:54:53.952894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.790 [2024-07-25 14:54:53.952902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.790 [2024-07-25 14:54:53.952907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.790 [2024-07-25 14:54:53.952924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.790 qpair failed and we were unable to recover it. 
00:27:33.790 [2024-07-25 14:54:53.962627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.790 [2024-07-25 14:54:53.962770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.790 [2024-07-25 14:54:53.962789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.790 [2024-07-25 14:54:53.962796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.790 [2024-07-25 14:54:53.962802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.790 [2024-07-25 14:54:53.962818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-07-25 14:54:53.972657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.790 [2024-07-25 14:54:53.972803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.790 [2024-07-25 14:54:53.972821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.790 [2024-07-25 14:54:53.972829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.790 [2024-07-25 14:54:53.972834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.790 [2024-07-25 14:54:53.972851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-07-25 14:54:53.982682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.790 [2024-07-25 14:54:53.982827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.790 [2024-07-25 14:54:53.982846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.790 [2024-07-25 14:54:53.982853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.790 [2024-07-25 14:54:53.982859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.790 [2024-07-25 14:54:53.982876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.790 qpair failed and we were unable to recover it. 
00:27:33.790 [2024-07-25 14:54:53.992778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.790 [2024-07-25 14:54:53.992932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.790 [2024-07-25 14:54:53.992950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.790 [2024-07-25 14:54:53.992957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.790 [2024-07-25 14:54:53.992962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.790 [2024-07-25 14:54:53.992979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-07-25 14:54:54.002836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.790 [2024-07-25 14:54:54.002983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.790 [2024-07-25 14:54:54.003002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.790 [2024-07-25 14:54:54.003009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.790 [2024-07-25 14:54:54.003015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.790 [2024-07-25 14:54:54.003032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.790 qpair failed and we were unable to recover it. 00:27:33.790 [2024-07-25 14:54:54.012843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.790 [2024-07-25 14:54:54.012986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.790 [2024-07-25 14:54:54.013005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.790 [2024-07-25 14:54:54.013011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.790 [2024-07-25 14:54:54.013017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.790 [2024-07-25 14:54:54.013034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.790 qpair failed and we were unable to recover it. 
00:27:33.790 [2024-07-25 14:54:54.022851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.790 [2024-07-25 14:54:54.022992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.790 [2024-07-25 14:54:54.023014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.790 [2024-07-25 14:54:54.023022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.790 [2024-07-25 14:54:54.023028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.791 [2024-07-25 14:54:54.023051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-07-25 14:54:54.032889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.791 [2024-07-25 14:54:54.033035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.791 [2024-07-25 14:54:54.033061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.791 [2024-07-25 14:54:54.033068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.791 [2024-07-25 14:54:54.033074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.791 [2024-07-25 14:54:54.033091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-07-25 14:54:54.042925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.791 [2024-07-25 14:54:54.043159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.791 [2024-07-25 14:54:54.043178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.791 [2024-07-25 14:54:54.043185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.791 [2024-07-25 14:54:54.043191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.791 [2024-07-25 14:54:54.043207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.791 qpair failed and we were unable to recover it. 
00:27:33.791 [2024-07-25 14:54:54.052956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.791 [2024-07-25 14:54:54.053124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.791 [2024-07-25 14:54:54.053143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.791 [2024-07-25 14:54:54.053150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.791 [2024-07-25 14:54:54.053156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.791 [2024-07-25 14:54:54.053173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-07-25 14:54:54.062975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.791 [2024-07-25 14:54:54.063128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.791 [2024-07-25 14:54:54.063147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.791 [2024-07-25 14:54:54.063154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.791 [2024-07-25 14:54:54.063160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.791 [2024-07-25 14:54:54.063177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.791 qpair failed and we were unable to recover it. 00:27:33.791 [2024-07-25 14:54:54.072994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.791 [2024-07-25 14:54:54.073144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.791 [2024-07-25 14:54:54.073163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.791 [2024-07-25 14:54:54.073170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.791 [2024-07-25 14:54:54.073176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:33.791 [2024-07-25 14:54:54.073192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.791 qpair failed and we were unable to recover it. 
00:27:34.052 [2024-07-25 14:54:54.083038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.052 [2024-07-25 14:54:54.083184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.052 [2024-07-25 14:54:54.083203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.052 [2024-07-25 14:54:54.083210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.052 [2024-07-25 14:54:54.083216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.052 [2024-07-25 14:54:54.083233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.052 qpair failed and we were unable to recover it. 00:27:34.052 [2024-07-25 14:54:54.093110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.052 [2024-07-25 14:54:54.093272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.052 [2024-07-25 14:54:54.093291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.052 [2024-07-25 14:54:54.093297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.052 [2024-07-25 14:54:54.093304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.052 [2024-07-25 14:54:54.093321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.052 qpair failed and we were unable to recover it. 00:27:34.052 [2024-07-25 14:54:54.103083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.052 [2024-07-25 14:54:54.103230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.052 [2024-07-25 14:54:54.103249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.052 [2024-07-25 14:54:54.103256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.052 [2024-07-25 14:54:54.103262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.052 [2024-07-25 14:54:54.103278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.052 qpair failed and we were unable to recover it. 
00:27:34.052 [2024-07-25 14:54:54.113120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.052 [2024-07-25 14:54:54.113266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.052 [2024-07-25 14:54:54.113288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.052 [2024-07-25 14:54:54.113295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.052 [2024-07-25 14:54:54.113301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.052 [2024-07-25 14:54:54.113318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.052 qpair failed and we were unable to recover it. 00:27:34.052 [2024-07-25 14:54:54.123152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.052 [2024-07-25 14:54:54.123302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.052 [2024-07-25 14:54:54.123320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.052 [2024-07-25 14:54:54.123327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.052 [2024-07-25 14:54:54.123333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.052 [2024-07-25 14:54:54.123350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.052 qpair failed and we were unable to recover it. 00:27:34.052 [2024-07-25 14:54:54.133184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.052 [2024-07-25 14:54:54.133330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.052 [2024-07-25 14:54:54.133349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.052 [2024-07-25 14:54:54.133356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.052 [2024-07-25 14:54:54.133362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.052 [2024-07-25 14:54:54.133378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.052 qpair failed and we were unable to recover it. 
00:27:34.052 [2024-07-25 14:54:54.143218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.052 [2024-07-25 14:54:54.143366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.052 [2024-07-25 14:54:54.143384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.052 [2024-07-25 14:54:54.143391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.052 [2024-07-25 14:54:54.143398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.052 [2024-07-25 14:54:54.143415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.052 qpair failed and we were unable to recover it. 00:27:34.052 [2024-07-25 14:54:54.153222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.052 [2024-07-25 14:54:54.153367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.052 [2024-07-25 14:54:54.153385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.052 [2024-07-25 14:54:54.153392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.052 [2024-07-25 14:54:54.153398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.052 [2024-07-25 14:54:54.153418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.052 qpair failed and we were unable to recover it. 00:27:34.052 [2024-07-25 14:54:54.163270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.052 [2024-07-25 14:54:54.163414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.052 [2024-07-25 14:54:54.163433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.052 [2024-07-25 14:54:54.163440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.052 [2024-07-25 14:54:54.163446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.052 [2024-07-25 14:54:54.163463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.053 qpair failed and we were unable to recover it. 
00:27:34.053 [2024-07-25 14:54:54.173290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.053 [2024-07-25 14:54:54.173435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.053 [2024-07-25 14:54:54.173453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.053 [2024-07-25 14:54:54.173460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.053 [2024-07-25 14:54:54.173466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.053 [2024-07-25 14:54:54.173484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.053 qpair failed and we were unable to recover it. 00:27:34.053 [2024-07-25 14:54:54.183303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.053 [2024-07-25 14:54:54.183665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.053 [2024-07-25 14:54:54.183684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.053 [2024-07-25 14:54:54.183690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.053 [2024-07-25 14:54:54.183696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.053 [2024-07-25 14:54:54.183713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.053 qpair failed and we were unable to recover it. 00:27:34.053 [2024-07-25 14:54:54.193348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.053 [2024-07-25 14:54:54.193496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.053 [2024-07-25 14:54:54.193514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.053 [2024-07-25 14:54:54.193521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.053 [2024-07-25 14:54:54.193527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.053 [2024-07-25 14:54:54.193543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.053 qpair failed and we were unable to recover it. 
00:27:34.053 [2024-07-25 14:54:54.203382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.053 [2024-07-25 14:54:54.203531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.053 [2024-07-25 14:54:54.203553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.053 [2024-07-25 14:54:54.203560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.053 [2024-07-25 14:54:54.203566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.053 [2024-07-25 14:54:54.203583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.053 qpair failed and we were unable to recover it. 00:27:34.053 [2024-07-25 14:54:54.213404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.053 [2024-07-25 14:54:54.213548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.053 [2024-07-25 14:54:54.213567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.053 [2024-07-25 14:54:54.213574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.053 [2024-07-25 14:54:54.213580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.053 [2024-07-25 14:54:54.213596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.053 qpair failed and we were unable to recover it. 00:27:34.053 [2024-07-25 14:54:54.223428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.053 [2024-07-25 14:54:54.223576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.053 [2024-07-25 14:54:54.223594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.053 [2024-07-25 14:54:54.223601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.053 [2024-07-25 14:54:54.223607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.053 [2024-07-25 14:54:54.223624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.053 qpair failed and we were unable to recover it. 
00:27:34.053 [2024-07-25 14:54:54.233473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.053 [2024-07-25 14:54:54.233615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.053 [2024-07-25 14:54:54.233633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.053 [2024-07-25 14:54:54.233640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.053 [2024-07-25 14:54:54.233646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.053 [2024-07-25 14:54:54.233663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.053 qpair failed and we were unable to recover it. 00:27:34.053 [2024-07-25 14:54:54.243419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.053 [2024-07-25 14:54:54.243782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.053 [2024-07-25 14:54:54.243801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.053 [2024-07-25 14:54:54.243808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.053 [2024-07-25 14:54:54.243814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.053 [2024-07-25 14:54:54.243834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.053 qpair failed and we were unable to recover it. 00:27:34.053 [2024-07-25 14:54:54.253523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.053 [2024-07-25 14:54:54.253671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.053 [2024-07-25 14:54:54.253690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.053 [2024-07-25 14:54:54.253697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.053 [2024-07-25 14:54:54.253703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.053 [2024-07-25 14:54:54.253720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.053 qpair failed and we were unable to recover it. 
00:27:34.053 [2024-07-25 14:54:54.263542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.053 [2024-07-25 14:54:54.263691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.053 [2024-07-25 14:54:54.263709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.053 [2024-07-25 14:54:54.263716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.053 [2024-07-25 14:54:54.263722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.053 [2024-07-25 14:54:54.263738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.053 qpair failed and we were unable to recover it. 00:27:34.053 [2024-07-25 14:54:54.273578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.053 [2024-07-25 14:54:54.273724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.053 [2024-07-25 14:54:54.273743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.053 [2024-07-25 14:54:54.273750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.053 [2024-07-25 14:54:54.273756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.053 [2024-07-25 14:54:54.273773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.053 qpair failed and we were unable to recover it. 00:27:34.053 [2024-07-25 14:54:54.283591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.053 [2024-07-25 14:54:54.283729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.053 [2024-07-25 14:54:54.283748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.053 [2024-07-25 14:54:54.283755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.053 [2024-07-25 14:54:54.283760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.053 [2024-07-25 14:54:54.283777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.053 qpair failed and we were unable to recover it. 
00:27:34.053 [2024-07-25 14:54:54.293609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.053 [2024-07-25 14:54:54.293755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.053 [2024-07-25 14:54:54.293777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.053 [2024-07-25 14:54:54.293784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.053 [2024-07-25 14:54:54.293789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.053 [2024-07-25 14:54:54.293806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.054 qpair failed and we were unable to recover it. 00:27:34.054 [2024-07-25 14:54:54.303579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.054 [2024-07-25 14:54:54.303726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.054 [2024-07-25 14:54:54.303745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.054 [2024-07-25 14:54:54.303753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.054 [2024-07-25 14:54:54.303759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.054 [2024-07-25 14:54:54.303775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.054 qpair failed and we were unable to recover it. 00:27:34.054 [2024-07-25 14:54:54.313679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.054 [2024-07-25 14:54:54.313825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.054 [2024-07-25 14:54:54.313844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.054 [2024-07-25 14:54:54.313851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.054 [2024-07-25 14:54:54.313857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.054 [2024-07-25 14:54:54.313874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.054 qpair failed and we were unable to recover it. 
00:27:34.054 [2024-07-25 14:54:54.323707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.054 [2024-07-25 14:54:54.323851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.054 [2024-07-25 14:54:54.323870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.054 [2024-07-25 14:54:54.323877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.054 [2024-07-25 14:54:54.323883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.054 [2024-07-25 14:54:54.323900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.054 qpair failed and we were unable to recover it. 00:27:34.054 [2024-07-25 14:54:54.333748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.054 [2024-07-25 14:54:54.333895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.054 [2024-07-25 14:54:54.333914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.054 [2024-07-25 14:54:54.333920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.054 [2024-07-25 14:54:54.333930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.054 [2024-07-25 14:54:54.333946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.054 qpair failed and we were unable to recover it. 00:27:34.315 [2024-07-25 14:54:54.343721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.315 [2024-07-25 14:54:54.343866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.315 [2024-07-25 14:54:54.343885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.315 [2024-07-25 14:54:54.343892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.315 [2024-07-25 14:54:54.343898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.315 [2024-07-25 14:54:54.343915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.315 qpair failed and we were unable to recover it. 
00:27:34.315 [2024-07-25 14:54:54.353778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.315 [2024-07-25 14:54:54.353930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.315 [2024-07-25 14:54:54.353948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.315 [2024-07-25 14:54:54.353955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.315 [2024-07-25 14:54:54.353961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.315 [2024-07-25 14:54:54.353978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.315 qpair failed and we were unable to recover it. 00:27:34.315 [2024-07-25 14:54:54.363827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.315 [2024-07-25 14:54:54.363972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.315 [2024-07-25 14:54:54.363990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.315 [2024-07-25 14:54:54.363997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.315 [2024-07-25 14:54:54.364003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.315 [2024-07-25 14:54:54.364019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.315 qpair failed and we were unable to recover it. 00:27:34.315 [2024-07-25 14:54:54.373872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.315 [2024-07-25 14:54:54.374019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.315 [2024-07-25 14:54:54.374037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.315 [2024-07-25 14:54:54.374049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.315 [2024-07-25 14:54:54.374056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.315 [2024-07-25 14:54:54.374073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.315 qpair failed and we were unable to recover it. 
00:27:34.315 [2024-07-25 14:54:54.383896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.315 [2024-07-25 14:54:54.384050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.315 [2024-07-25 14:54:54.384071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.315 [2024-07-25 14:54:54.384078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.315 [2024-07-25 14:54:54.384084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.315 [2024-07-25 14:54:54.384100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.315 qpair failed and we were unable to recover it. 00:27:34.315 [2024-07-25 14:54:54.393964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.315 [2024-07-25 14:54:54.394120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.315 [2024-07-25 14:54:54.394139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.315 [2024-07-25 14:54:54.394146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.315 [2024-07-25 14:54:54.394152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.315 [2024-07-25 14:54:54.394169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.315 qpair failed and we were unable to recover it. 00:27:34.315 [2024-07-25 14:54:54.403960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.316 [2024-07-25 14:54:54.404109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.316 [2024-07-25 14:54:54.404128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.316 [2024-07-25 14:54:54.404134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.316 [2024-07-25 14:54:54.404141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.316 [2024-07-25 14:54:54.404158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.316 qpair failed and we were unable to recover it. 
00:27:34.316 [2024-07-25 14:54:54.414025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.316 [2024-07-25 14:54:54.414181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.316 [2024-07-25 14:54:54.414200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.316 [2024-07-25 14:54:54.414207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.316 [2024-07-25 14:54:54.414213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.316 [2024-07-25 14:54:54.414230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.316 qpair failed and we were unable to recover it. 00:27:34.316 [2024-07-25 14:54:54.424018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.316 [2024-07-25 14:54:54.424170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.316 [2024-07-25 14:54:54.424190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.316 [2024-07-25 14:54:54.424196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.316 [2024-07-25 14:54:54.424206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.316 [2024-07-25 14:54:54.424222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.316 qpair failed and we were unable to recover it. 00:27:34.316 [2024-07-25 14:54:54.434051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.316 [2024-07-25 14:54:54.434200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.316 [2024-07-25 14:54:54.434218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.316 [2024-07-25 14:54:54.434225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.316 [2024-07-25 14:54:54.434231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.316 [2024-07-25 14:54:54.434248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.316 qpair failed and we were unable to recover it. 
00:27:34.316 [2024-07-25 14:54:54.444069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.316 [2024-07-25 14:54:54.444211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.316 [2024-07-25 14:54:54.444229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.316 [2024-07-25 14:54:54.444236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.316 [2024-07-25 14:54:54.444242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.316 [2024-07-25 14:54:54.444258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.316 qpair failed and we were unable to recover it. 00:27:34.316 [2024-07-25 14:54:54.454111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.316 [2024-07-25 14:54:54.454257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.316 [2024-07-25 14:54:54.454275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.316 [2024-07-25 14:54:54.454282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.316 [2024-07-25 14:54:54.454288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.316 [2024-07-25 14:54:54.454304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.316 qpair failed and we were unable to recover it. 00:27:34.316 [2024-07-25 14:54:54.464129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.316 [2024-07-25 14:54:54.464278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.316 [2024-07-25 14:54:54.464296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.316 [2024-07-25 14:54:54.464303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.316 [2024-07-25 14:54:54.464309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.316 [2024-07-25 14:54:54.464326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.316 qpair failed and we were unable to recover it. 
00:27:34.316 [2024-07-25 14:54:54.474157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.316 [2024-07-25 14:54:54.474308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.316 [2024-07-25 14:54:54.474327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.316 [2024-07-25 14:54:54.474334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.316 [2024-07-25 14:54:54.474340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.316 [2024-07-25 14:54:54.474356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.316 qpair failed and we were unable to recover it. 00:27:34.316 [2024-07-25 14:54:54.484189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.316 [2024-07-25 14:54:54.484338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.316 [2024-07-25 14:54:54.484357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.316 [2024-07-25 14:54:54.484364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.316 [2024-07-25 14:54:54.484370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.316 [2024-07-25 14:54:54.484386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.316 qpair failed and we were unable to recover it. 00:27:34.316 [2024-07-25 14:54:54.494226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.316 [2024-07-25 14:54:54.494377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.316 [2024-07-25 14:54:54.494395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.316 [2024-07-25 14:54:54.494402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.316 [2024-07-25 14:54:54.494408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.316 [2024-07-25 14:54:54.494424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.316 qpair failed and we were unable to recover it. 
00:27:34.316 [2024-07-25 14:54:54.504244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.316 [2024-07-25 14:54:54.504412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.316 [2024-07-25 14:54:54.504430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.316 [2024-07-25 14:54:54.504437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.316 [2024-07-25 14:54:54.504443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.316 [2024-07-25 14:54:54.504460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.316 qpair failed and we were unable to recover it. 00:27:34.316 [2024-07-25 14:54:54.514280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.316 [2024-07-25 14:54:54.514424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.316 [2024-07-25 14:54:54.514442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.316 [2024-07-25 14:54:54.514449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.316 [2024-07-25 14:54:54.514458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.316 [2024-07-25 14:54:54.514475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.316 qpair failed and we were unable to recover it. 00:27:34.316 [2024-07-25 14:54:54.524305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.316 [2024-07-25 14:54:54.524447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.316 [2024-07-25 14:54:54.524466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.316 [2024-07-25 14:54:54.524472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.316 [2024-07-25 14:54:54.524478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.316 [2024-07-25 14:54:54.524494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.316 qpair failed and we were unable to recover it. 
00:27:34.316 [2024-07-25 14:54:54.534354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.316 [2024-07-25 14:54:54.534517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.316 [2024-07-25 14:54:54.534536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.317 [2024-07-25 14:54:54.534543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.317 [2024-07-25 14:54:54.534548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.317 [2024-07-25 14:54:54.534565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.317 qpair failed and we were unable to recover it. 00:27:34.317 [2024-07-25 14:54:54.544308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.317 [2024-07-25 14:54:54.544454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.317 [2024-07-25 14:54:54.544473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.317 [2024-07-25 14:54:54.544480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.317 [2024-07-25 14:54:54.544486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.317 [2024-07-25 14:54:54.544503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.317 qpair failed and we were unable to recover it. 00:27:34.317 [2024-07-25 14:54:54.554382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.317 [2024-07-25 14:54:54.554525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.317 [2024-07-25 14:54:54.554544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.317 [2024-07-25 14:54:54.554550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.317 [2024-07-25 14:54:54.554556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.317 [2024-07-25 14:54:54.554573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.317 qpair failed and we were unable to recover it. 
00:27:34.317 [2024-07-25 14:54:54.564421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.317 [2024-07-25 14:54:54.564565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.317 [2024-07-25 14:54:54.564584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.317 [2024-07-25 14:54:54.564591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.317 [2024-07-25 14:54:54.564597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.317 [2024-07-25 14:54:54.564614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.317 qpair failed and we were unable to recover it. 00:27:34.317 [2024-07-25 14:54:54.574450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.317 [2024-07-25 14:54:54.574596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.317 [2024-07-25 14:54:54.574615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.317 [2024-07-25 14:54:54.574622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.317 [2024-07-25 14:54:54.574628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.317 [2024-07-25 14:54:54.574644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.317 qpair failed and we were unable to recover it. 00:27:34.317 [2024-07-25 14:54:54.584473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.317 [2024-07-25 14:54:54.584619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.317 [2024-07-25 14:54:54.584638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.317 [2024-07-25 14:54:54.584645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.317 [2024-07-25 14:54:54.584651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.317 [2024-07-25 14:54:54.584667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.317 qpair failed and we were unable to recover it. 
00:27:34.317 [2024-07-25 14:54:54.594485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.317 [2024-07-25 14:54:54.594630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.317 [2024-07-25 14:54:54.594649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.317 [2024-07-25 14:54:54.594656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.317 [2024-07-25 14:54:54.594662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.317 [2024-07-25 14:54:54.594678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.317 qpair failed and we were unable to recover it. 00:27:34.317 [2024-07-25 14:54:54.604536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.317 [2024-07-25 14:54:54.604685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.317 [2024-07-25 14:54:54.604704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.317 [2024-07-25 14:54:54.604711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.317 [2024-07-25 14:54:54.604720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.317 [2024-07-25 14:54:54.604736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.317 qpair failed and we were unable to recover it. 00:27:34.578 [2024-07-25 14:54:54.614499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.578 [2024-07-25 14:54:54.614647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.578 [2024-07-25 14:54:54.614665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.578 [2024-07-25 14:54:54.614672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.578 [2024-07-25 14:54:54.614678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.578 [2024-07-25 14:54:54.614694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.578 qpair failed and we were unable to recover it. 
00:27:34.578 [2024-07-25 14:54:54.624585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.578 [2024-07-25 14:54:54.624735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.578 [2024-07-25 14:54:54.624754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.578 [2024-07-25 14:54:54.624761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.578 [2024-07-25 14:54:54.624767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.578 [2024-07-25 14:54:54.624784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.578 qpair failed and we were unable to recover it. 00:27:34.578 [2024-07-25 14:54:54.634602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.578 [2024-07-25 14:54:54.634751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.578 [2024-07-25 14:54:54.634769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.578 [2024-07-25 14:54:54.634776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.578 [2024-07-25 14:54:54.634782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.578 [2024-07-25 14:54:54.634799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.578 qpair failed and we were unable to recover it. 00:27:34.578 [2024-07-25 14:54:54.644658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.578 [2024-07-25 14:54:54.644801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.579 [2024-07-25 14:54:54.644820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.579 [2024-07-25 14:54:54.644827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.579 [2024-07-25 14:54:54.644833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.579 [2024-07-25 14:54:54.644850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.579 qpair failed and we were unable to recover it. 
00:27:34.579 [2024-07-25 14:54:54.654689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.579 [2024-07-25 14:54:54.654839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.579 [2024-07-25 14:54:54.654857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.579 [2024-07-25 14:54:54.654864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.579 [2024-07-25 14:54:54.654870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.579 [2024-07-25 14:54:54.654886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.579 qpair failed and we were unable to recover it. 00:27:34.579 [2024-07-25 14:54:54.664688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.579 [2024-07-25 14:54:54.664832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.579 [2024-07-25 14:54:54.664850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.579 [2024-07-25 14:54:54.664858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.579 [2024-07-25 14:54:54.664864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.579 [2024-07-25 14:54:54.664881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.579 qpair failed and we were unable to recover it. 00:27:34.579 [2024-07-25 14:54:54.674784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.579 [2024-07-25 14:54:54.674927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.579 [2024-07-25 14:54:54.674946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.579 [2024-07-25 14:54:54.674952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.579 [2024-07-25 14:54:54.674959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.579 [2024-07-25 14:54:54.674976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.579 qpair failed and we were unable to recover it. 
00:27:34.579 [2024-07-25 14:54:54.684736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.579 [2024-07-25 14:54:54.684887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.579 [2024-07-25 14:54:54.684906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.579 [2024-07-25 14:54:54.684913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.579 [2024-07-25 14:54:54.684919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.579 [2024-07-25 14:54:54.684935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.579 qpair failed and we were unable to recover it. 00:27:34.579 [2024-07-25 14:54:54.694788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.579 [2024-07-25 14:54:54.694936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.579 [2024-07-25 14:54:54.694954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.579 [2024-07-25 14:54:54.694965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.579 [2024-07-25 14:54:54.694971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.579 [2024-07-25 14:54:54.694988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.579 qpair failed and we were unable to recover it. 00:27:34.579 [2024-07-25 14:54:54.704807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.579 [2024-07-25 14:54:54.704958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.579 [2024-07-25 14:54:54.704977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.579 [2024-07-25 14:54:54.704984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.579 [2024-07-25 14:54:54.704990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.579 [2024-07-25 14:54:54.705006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.579 qpair failed and we were unable to recover it. 
00:27:34.579 [2024-07-25 14:54:54.714842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.579 [2024-07-25 14:54:54.714988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.579 [2024-07-25 14:54:54.715006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.579 [2024-07-25 14:54:54.715013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.579 [2024-07-25 14:54:54.715019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.579 [2024-07-25 14:54:54.715035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.579 qpair failed and we were unable to recover it. 00:27:34.579 [2024-07-25 14:54:54.724805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.579 [2024-07-25 14:54:54.724950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.579 [2024-07-25 14:54:54.724968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.579 [2024-07-25 14:54:54.724976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.579 [2024-07-25 14:54:54.724981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.579 [2024-07-25 14:54:54.724998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.579 qpair failed and we were unable to recover it. 00:27:34.579 [2024-07-25 14:54:54.734919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.579 [2024-07-25 14:54:54.735156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.579 [2024-07-25 14:54:54.735175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.579 [2024-07-25 14:54:54.735182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.579 [2024-07-25 14:54:54.735188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.579 [2024-07-25 14:54:54.735205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.579 qpair failed and we were unable to recover it. 
00:27:34.579 [2024-07-25 14:54:54.744931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.579 [2024-07-25 14:54:54.745081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.579 [2024-07-25 14:54:54.745100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.579 [2024-07-25 14:54:54.745106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.579 [2024-07-25 14:54:54.745112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.579 [2024-07-25 14:54:54.745128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.579 qpair failed and we were unable to recover it. 00:27:34.579 [2024-07-25 14:54:54.754945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.579 [2024-07-25 14:54:54.755101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.579 [2024-07-25 14:54:54.755120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.579 [2024-07-25 14:54:54.755127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.579 [2024-07-25 14:54:54.755132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.579 [2024-07-25 14:54:54.755149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.579 qpair failed and we were unable to recover it. 00:27:34.579 [2024-07-25 14:54:54.764980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.579 [2024-07-25 14:54:54.765131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.579 [2024-07-25 14:54:54.765150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.579 [2024-07-25 14:54:54.765157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.579 [2024-07-25 14:54:54.765163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.579 [2024-07-25 14:54:54.765180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.579 qpair failed and we were unable to recover it. 
00:27:34.579 [2024-07-25 14:54:54.775025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.579 [2024-07-25 14:54:54.775177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.579 [2024-07-25 14:54:54.775196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.579 [2024-07-25 14:54:54.775203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.579 [2024-07-25 14:54:54.775209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.580 [2024-07-25 14:54:54.775226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.580 qpair failed and we were unable to recover it. 00:27:34.580 [2024-07-25 14:54:54.785036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.580 [2024-07-25 14:54:54.785190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.580 [2024-07-25 14:54:54.785208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.580 [2024-07-25 14:54:54.785219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.580 [2024-07-25 14:54:54.785225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.580 [2024-07-25 14:54:54.785241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.580 qpair failed and we were unable to recover it. 00:27:34.580 [2024-07-25 14:54:54.795067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.580 [2024-07-25 14:54:54.795216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.580 [2024-07-25 14:54:54.795234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.580 [2024-07-25 14:54:54.795241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.580 [2024-07-25 14:54:54.795247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.580 [2024-07-25 14:54:54.795263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.580 qpair failed and we were unable to recover it. 
00:27:34.580 [2024-07-25 14:54:54.805022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.580 [2024-07-25 14:54:54.805173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.580 [2024-07-25 14:54:54.805192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.580 [2024-07-25 14:54:54.805199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.580 [2024-07-25 14:54:54.805205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.580 [2024-07-25 14:54:54.805221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.580 qpair failed and we were unable to recover it. 00:27:34.580 [2024-07-25 14:54:54.815056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.580 [2024-07-25 14:54:54.815202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.580 [2024-07-25 14:54:54.815220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.580 [2024-07-25 14:54:54.815227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.580 [2024-07-25 14:54:54.815233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.580 [2024-07-25 14:54:54.815249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.580 qpair failed and we were unable to recover it. 00:27:34.580 [2024-07-25 14:54:54.825167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.580 [2024-07-25 14:54:54.825320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.580 [2024-07-25 14:54:54.825339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.580 [2024-07-25 14:54:54.825346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.580 [2024-07-25 14:54:54.825352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.580 [2024-07-25 14:54:54.825369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.580 qpair failed and we were unable to recover it. 
00:27:34.580 [2024-07-25 14:54:54.835169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.580 [2024-07-25 14:54:54.835315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.580 [2024-07-25 14:54:54.835333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.580 [2024-07-25 14:54:54.835340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.580 [2024-07-25 14:54:54.835346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.580 [2024-07-25 14:54:54.835363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.580 qpair failed and we were unable to recover it. 00:27:34.580 [2024-07-25 14:54:54.845266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.580 [2024-07-25 14:54:54.845420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.580 [2024-07-25 14:54:54.845438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.580 [2024-07-25 14:54:54.845445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.580 [2024-07-25 14:54:54.845451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.580 [2024-07-25 14:54:54.845467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.580 qpair failed and we were unable to recover it. 00:27:34.580 [2024-07-25 14:54:54.855259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.580 [2024-07-25 14:54:54.855405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.580 [2024-07-25 14:54:54.855424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.580 [2024-07-25 14:54:54.855431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.580 [2024-07-25 14:54:54.855436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.580 [2024-07-25 14:54:54.855453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.580 qpair failed and we were unable to recover it. 
00:27:34.580 [2024-07-25 14:54:54.865272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.580 [2024-07-25 14:54:54.865424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.580 [2024-07-25 14:54:54.865443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.580 [2024-07-25 14:54:54.865450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.580 [2024-07-25 14:54:54.865456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.580 [2024-07-25 14:54:54.865472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.580 qpair failed and we were unable to recover it. 00:27:34.841 [2024-07-25 14:54:54.875301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.841 [2024-07-25 14:54:54.875445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.841 [2024-07-25 14:54:54.875463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.841 [2024-07-25 14:54:54.875473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.841 [2024-07-25 14:54:54.875479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.841 [2024-07-25 14:54:54.875496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.841 qpair failed and we were unable to recover it. 00:27:34.841 [2024-07-25 14:54:54.885329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.841 [2024-07-25 14:54:54.885475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.841 [2024-07-25 14:54:54.885493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.841 [2024-07-25 14:54:54.885500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.841 [2024-07-25 14:54:54.885506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.841 [2024-07-25 14:54:54.885522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.841 qpair failed and we were unable to recover it. 
00:27:34.841 [2024-07-25 14:54:54.895371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.841 [2024-07-25 14:54:54.895519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.841 [2024-07-25 14:54:54.895538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.841 [2024-07-25 14:54:54.895545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.841 [2024-07-25 14:54:54.895551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.841 [2024-07-25 14:54:54.895567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.841 qpair failed and we were unable to recover it. 00:27:34.841 [2024-07-25 14:54:54.905397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.841 [2024-07-25 14:54:54.905545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.841 [2024-07-25 14:54:54.905564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.841 [2024-07-25 14:54:54.905570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.841 [2024-07-25 14:54:54.905577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.841 [2024-07-25 14:54:54.905593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.841 qpair failed and we were unable to recover it. 00:27:34.841 [2024-07-25 14:54:54.915338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.841 [2024-07-25 14:54:54.915482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.841 [2024-07-25 14:54:54.915500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.841 [2024-07-25 14:54:54.915507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.841 [2024-07-25 14:54:54.915513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.841 [2024-07-25 14:54:54.915529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.841 qpair failed and we were unable to recover it. 
00:27:34.841 [2024-07-25 14:54:54.925376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.841 [2024-07-25 14:54:54.925518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.841 [2024-07-25 14:54:54.925537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.841 [2024-07-25 14:54:54.925544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.841 [2024-07-25 14:54:54.925550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.841 [2024-07-25 14:54:54.925567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.841 qpair failed and we were unable to recover it. 00:27:34.841 [2024-07-25 14:54:54.935469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.841 [2024-07-25 14:54:54.935616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.841 [2024-07-25 14:54:54.935634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.841 [2024-07-25 14:54:54.935641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.841 [2024-07-25 14:54:54.935647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.841 [2024-07-25 14:54:54.935664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.841 qpair failed and we were unable to recover it. 00:27:34.841 [2024-07-25 14:54:54.945506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.841 [2024-07-25 14:54:54.945653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.841 [2024-07-25 14:54:54.945672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.841 [2024-07-25 14:54:54.945679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.841 [2024-07-25 14:54:54.945685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.841 [2024-07-25 14:54:54.945701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.841 qpair failed and we were unable to recover it. 
00:27:34.841 [2024-07-25 14:54:54.955533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.841 [2024-07-25 14:54:54.955682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.841 [2024-07-25 14:54:54.955701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.841 [2024-07-25 14:54:54.955708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.841 [2024-07-25 14:54:54.955714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.841 [2024-07-25 14:54:54.955730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.841 qpair failed and we were unable to recover it. 00:27:34.841 [2024-07-25 14:54:54.965564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.841 [2024-07-25 14:54:54.965709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.841 [2024-07-25 14:54:54.965731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.841 [2024-07-25 14:54:54.965739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.841 [2024-07-25 14:54:54.965746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.841 [2024-07-25 14:54:54.965762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.841 qpair failed and we were unable to recover it. 00:27:34.841 [2024-07-25 14:54:54.975572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.841 [2024-07-25 14:54:54.975758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.841 [2024-07-25 14:54:54.975777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.841 [2024-07-25 14:54:54.975783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.841 [2024-07-25 14:54:54.975790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.841 [2024-07-25 14:54:54.975806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.841 qpair failed and we were unable to recover it. 
00:27:34.841 [2024-07-25 14:54:54.985600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.841 [2024-07-25 14:54:54.985745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.841 [2024-07-25 14:54:54.985763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.842 [2024-07-25 14:54:54.985770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.842 [2024-07-25 14:54:54.985776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.842 [2024-07-25 14:54:54.985792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.842 qpair failed and we were unable to recover it. 00:27:34.842 [2024-07-25 14:54:54.995565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.842 [2024-07-25 14:54:54.995711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.842 [2024-07-25 14:54:54.995730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.842 [2024-07-25 14:54:54.995737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.842 [2024-07-25 14:54:54.995742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.842 [2024-07-25 14:54:54.995759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.842 qpair failed and we were unable to recover it. 00:27:34.842 [2024-07-25 14:54:55.005675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.842 [2024-07-25 14:54:55.005824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.842 [2024-07-25 14:54:55.005843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.842 [2024-07-25 14:54:55.005851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.842 [2024-07-25 14:54:55.005857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.842 [2024-07-25 14:54:55.005873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.842 qpair failed and we were unable to recover it. 
00:27:34.842 [2024-07-25 14:54:55.015636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.842 [2024-07-25 14:54:55.015788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.842 [2024-07-25 14:54:55.015807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.842 [2024-07-25 14:54:55.015814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.842 [2024-07-25 14:54:55.015820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.842 [2024-07-25 14:54:55.015837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.842 qpair failed and we were unable to recover it. 00:27:34.842 [2024-07-25 14:54:55.025727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.842 [2024-07-25 14:54:55.025876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.842 [2024-07-25 14:54:55.025895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.842 [2024-07-25 14:54:55.025902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.842 [2024-07-25 14:54:55.025908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.842 [2024-07-25 14:54:55.025924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.842 qpair failed and we were unable to recover it. 00:27:34.842 [2024-07-25 14:54:55.035762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.842 [2024-07-25 14:54:55.035911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.842 [2024-07-25 14:54:55.035931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.842 [2024-07-25 14:54:55.035938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.842 [2024-07-25 14:54:55.035944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.842 [2024-07-25 14:54:55.035961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.842 qpair failed and we were unable to recover it. 
00:27:34.842 [2024-07-25 14:54:55.045774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.842 [2024-07-25 14:54:55.045922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.842 [2024-07-25 14:54:55.045940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.842 [2024-07-25 14:54:55.045947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.842 [2024-07-25 14:54:55.045953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.842 [2024-07-25 14:54:55.045970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.842 qpair failed and we were unable to recover it. 00:27:34.842 [2024-07-25 14:54:55.055834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.842 [2024-07-25 14:54:55.055983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.842 [2024-07-25 14:54:55.056005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.842 [2024-07-25 14:54:55.056012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.842 [2024-07-25 14:54:55.056018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.842 [2024-07-25 14:54:55.056034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.842 qpair failed and we were unable to recover it. 00:27:34.842 [2024-07-25 14:54:55.065843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.842 [2024-07-25 14:54:55.065996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.842 [2024-07-25 14:54:55.066015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.842 [2024-07-25 14:54:55.066022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.842 [2024-07-25 14:54:55.066027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.842 [2024-07-25 14:54:55.066051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.842 qpair failed and we were unable to recover it. 
00:27:34.842 [2024-07-25 14:54:55.075797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.842 [2024-07-25 14:54:55.075950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.842 [2024-07-25 14:54:55.075969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.842 [2024-07-25 14:54:55.075976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.842 [2024-07-25 14:54:55.075981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.842 [2024-07-25 14:54:55.075998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.842 qpair failed and we were unable to recover it. 00:27:34.842 [2024-07-25 14:54:55.085895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.842 [2024-07-25 14:54:55.086038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.842 [2024-07-25 14:54:55.086062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.842 [2024-07-25 14:54:55.086068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.842 [2024-07-25 14:54:55.086075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.842 [2024-07-25 14:54:55.086092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.842 qpair failed and we were unable to recover it. 00:27:34.842 [2024-07-25 14:54:55.095962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.842 [2024-07-25 14:54:55.096130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.842 [2024-07-25 14:54:55.096149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.842 [2024-07-25 14:54:55.096155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.842 [2024-07-25 14:54:55.096161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.842 [2024-07-25 14:54:55.096181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.842 qpair failed and we were unable to recover it. 
00:27:34.842 [2024-07-25 14:54:55.105973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.842 [2024-07-25 14:54:55.106130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.842 [2024-07-25 14:54:55.106150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.842 [2024-07-25 14:54:55.106157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.842 [2024-07-25 14:54:55.106163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.842 [2024-07-25 14:54:55.106180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.842 qpair failed and we were unable to recover it. 00:27:34.842 [2024-07-25 14:54:55.115968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.842 [2024-07-25 14:54:55.116123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.842 [2024-07-25 14:54:55.116141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.843 [2024-07-25 14:54:55.116148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.843 [2024-07-25 14:54:55.116154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.843 [2024-07-25 14:54:55.116171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.843 qpair failed and we were unable to recover it. 00:27:34.843 [2024-07-25 14:54:55.125985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.843 [2024-07-25 14:54:55.126140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.843 [2024-07-25 14:54:55.126158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.843 [2024-07-25 14:54:55.126165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.843 [2024-07-25 14:54:55.126171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:34.843 [2024-07-25 14:54:55.126188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.843 qpair failed and we were unable to recover it. 
00:27:35.103 [2024-07-25 14:54:55.136062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.103 [2024-07-25 14:54:55.136207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.103 [2024-07-25 14:54:55.136232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.103 [2024-07-25 14:54:55.136240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.103 [2024-07-25 14:54:55.136246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.103 [2024-07-25 14:54:55.136263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.103 qpair failed and we were unable to recover it. 00:27:35.103 [2024-07-25 14:54:55.146091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.103 [2024-07-25 14:54:55.146237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.103 [2024-07-25 14:54:55.146259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.103 [2024-07-25 14:54:55.146267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.103 [2024-07-25 14:54:55.146273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.103 [2024-07-25 14:54:55.146290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.103 qpair failed and we were unable to recover it. 00:27:35.103 [2024-07-25 14:54:55.156081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.103 [2024-07-25 14:54:55.156235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.103 [2024-07-25 14:54:55.156254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.103 [2024-07-25 14:54:55.156261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.103 [2024-07-25 14:54:55.156268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.103 [2024-07-25 14:54:55.156285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.103 qpair failed and we were unable to recover it. 
00:27:35.103 [2024-07-25 14:54:55.166145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.103 [2024-07-25 14:54:55.166284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.103 [2024-07-25 14:54:55.166303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.103 [2024-07-25 14:54:55.166311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.103 [2024-07-25 14:54:55.166317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.103 [2024-07-25 14:54:55.166334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.103 qpair failed and we were unable to recover it. 00:27:35.103 [2024-07-25 14:54:55.176102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.103 [2024-07-25 14:54:55.176247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.103 [2024-07-25 14:54:55.176265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.103 [2024-07-25 14:54:55.176272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.103 [2024-07-25 14:54:55.176278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.103 [2024-07-25 14:54:55.176295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.103 qpair failed and we were unable to recover it. 00:27:35.103 [2024-07-25 14:54:55.186141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.103 [2024-07-25 14:54:55.186332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.103 [2024-07-25 14:54:55.186351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.103 [2024-07-25 14:54:55.186358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.103 [2024-07-25 14:54:55.186364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.103 [2024-07-25 14:54:55.186384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.103 qpair failed and we were unable to recover it. 
00:27:35.103 [2024-07-25 14:54:55.196250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.103 [2024-07-25 14:54:55.196394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.103 [2024-07-25 14:54:55.196413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.103 [2024-07-25 14:54:55.196420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.103 [2024-07-25 14:54:55.196426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.103 [2024-07-25 14:54:55.196442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.103 qpair failed and we were unable to recover it. 00:27:35.103 [2024-07-25 14:54:55.206211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.103 [2024-07-25 14:54:55.206362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.103 [2024-07-25 14:54:55.206381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.103 [2024-07-25 14:54:55.206387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.103 [2024-07-25 14:54:55.206393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.103 [2024-07-25 14:54:55.206410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.103 qpair failed and we were unable to recover it. 00:27:35.103 [2024-07-25 14:54:55.216309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.103 [2024-07-25 14:54:55.216457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.103 [2024-07-25 14:54:55.216475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.103 [2024-07-25 14:54:55.216482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.103 [2024-07-25 14:54:55.216488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.103 [2024-07-25 14:54:55.216505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.103 qpair failed and we were unable to recover it. 
00:27:35.103 [2024-07-25 14:54:55.226270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.103 [2024-07-25 14:54:55.226447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.103 [2024-07-25 14:54:55.226465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.103 [2024-07-25 14:54:55.226472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.103 [2024-07-25 14:54:55.226478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.103 [2024-07-25 14:54:55.226495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.103 qpair failed and we were unable to recover it. 00:27:35.103 [2024-07-25 14:54:55.236335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.103 [2024-07-25 14:54:55.236517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.103 [2024-07-25 14:54:55.236539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.103 [2024-07-25 14:54:55.236546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.103 [2024-07-25 14:54:55.236552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.104 [2024-07-25 14:54:55.236569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.104 qpair failed and we were unable to recover it. 00:27:35.104 [2024-07-25 14:54:55.246365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.104 [2024-07-25 14:54:55.246512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.104 [2024-07-25 14:54:55.246531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.104 [2024-07-25 14:54:55.246538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.104 [2024-07-25 14:54:55.246544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.104 [2024-07-25 14:54:55.246561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.104 qpair failed and we were unable to recover it. 
00:27:35.104 [2024-07-25 14:54:55.256333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.104 [2024-07-25 14:54:55.256481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.104 [2024-07-25 14:54:55.256499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.104 [2024-07-25 14:54:55.256506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.104 [2024-07-25 14:54:55.256512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.104 [2024-07-25 14:54:55.256529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.104 qpair failed and we were unable to recover it. 00:27:35.104 [2024-07-25 14:54:55.266355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.104 [2024-07-25 14:54:55.266498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.104 [2024-07-25 14:54:55.266517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.104 [2024-07-25 14:54:55.266524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.104 [2024-07-25 14:54:55.266530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.104 [2024-07-25 14:54:55.266547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.104 qpair failed and we were unable to recover it. 00:27:35.104 [2024-07-25 14:54:55.276461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.104 [2024-07-25 14:54:55.276607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.104 [2024-07-25 14:54:55.276626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.104 [2024-07-25 14:54:55.276632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.104 [2024-07-25 14:54:55.276638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.104 [2024-07-25 14:54:55.276658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.104 qpair failed and we were unable to recover it. 
00:27:35.104 [2024-07-25 14:54:55.286494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.104 [2024-07-25 14:54:55.286638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.104 [2024-07-25 14:54:55.286658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.104 [2024-07-25 14:54:55.286665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.104 [2024-07-25 14:54:55.286670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.104 [2024-07-25 14:54:55.286687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.104 qpair failed and we were unable to recover it. 00:27:35.104 [2024-07-25 14:54:55.296503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.104 [2024-07-25 14:54:55.296647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.104 [2024-07-25 14:54:55.296665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.104 [2024-07-25 14:54:55.296672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.104 [2024-07-25 14:54:55.296678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.104 [2024-07-25 14:54:55.296696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.104 qpair failed and we were unable to recover it. 00:27:35.104 [2024-07-25 14:54:55.306463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.104 [2024-07-25 14:54:55.306614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.104 [2024-07-25 14:54:55.306633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.104 [2024-07-25 14:54:55.306640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.104 [2024-07-25 14:54:55.306646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.104 [2024-07-25 14:54:55.306662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.104 qpair failed and we were unable to recover it. 
00:27:35.104 [2024-07-25 14:54:55.316578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.104 [2024-07-25 14:54:55.316723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.104 [2024-07-25 14:54:55.316742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.104 [2024-07-25 14:54:55.316749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.104 [2024-07-25 14:54:55.316754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.104 [2024-07-25 14:54:55.316771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.104 qpair failed and we were unable to recover it. 00:27:35.104 [2024-07-25 14:54:55.326531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.104 [2024-07-25 14:54:55.326676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.104 [2024-07-25 14:54:55.326701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.104 [2024-07-25 14:54:55.326707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.104 [2024-07-25 14:54:55.326713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.104 [2024-07-25 14:54:55.326729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.104 qpair failed and we were unable to recover it. 00:27:35.104 [2024-07-25 14:54:55.336632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.104 [2024-07-25 14:54:55.336778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.104 [2024-07-25 14:54:55.336797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.104 [2024-07-25 14:54:55.336803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.104 [2024-07-25 14:54:55.336809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.104 [2024-07-25 14:54:55.336826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.104 qpair failed and we were unable to recover it. 
00:27:35.104 [2024-07-25 14:54:55.346578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.104 [2024-07-25 14:54:55.346727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.104 [2024-07-25 14:54:55.346746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.104 [2024-07-25 14:54:55.346753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.104 [2024-07-25 14:54:55.346759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.104 [2024-07-25 14:54:55.346776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.104 qpair failed and we were unable to recover it. 00:27:35.104 [2024-07-25 14:54:55.356609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.104 [2024-07-25 14:54:55.356754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.104 [2024-07-25 14:54:55.356773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.104 [2024-07-25 14:54:55.356780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.104 [2024-07-25 14:54:55.356785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.104 [2024-07-25 14:54:55.356802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.104 qpair failed and we were unable to recover it. 00:27:35.104 [2024-07-25 14:54:55.366723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.104 [2024-07-25 14:54:55.366870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.104 [2024-07-25 14:54:55.366889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.104 [2024-07-25 14:54:55.366896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.104 [2024-07-25 14:54:55.366905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.105 [2024-07-25 14:54:55.366922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.105 qpair failed and we were unable to recover it. 
00:27:35.105 [2024-07-25 14:54:55.376670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.105 [2024-07-25 14:54:55.376822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.105 [2024-07-25 14:54:55.376841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.105 [2024-07-25 14:54:55.376848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.105 [2024-07-25 14:54:55.376853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.105 [2024-07-25 14:54:55.376870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.105 qpair failed and we were unable to recover it. 00:27:35.105 [2024-07-25 14:54:55.386722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.105 [2024-07-25 14:54:55.386877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.105 [2024-07-25 14:54:55.386895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.105 [2024-07-25 14:54:55.386902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.105 [2024-07-25 14:54:55.386908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.105 [2024-07-25 14:54:55.386924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.105 qpair failed and we were unable to recover it. 00:27:35.365 [2024-07-25 14:54:55.396723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.365 [2024-07-25 14:54:55.396879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.365 [2024-07-25 14:54:55.396897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.365 [2024-07-25 14:54:55.396905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.365 [2024-07-25 14:54:55.396911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.365 [2024-07-25 14:54:55.396928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.365 qpair failed and we were unable to recover it. 
00:27:35.365 [2024-07-25 14:54:55.406812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.365 [2024-07-25 14:54:55.406958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.365 [2024-07-25 14:54:55.406978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.365 [2024-07-25 14:54:55.406984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.366 [2024-07-25 14:54:55.406991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.366 [2024-07-25 14:54:55.407008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.366 qpair failed and we were unable to recover it. 00:27:35.366 [2024-07-25 14:54:55.416840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.366 [2024-07-25 14:54:55.417208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.366 [2024-07-25 14:54:55.417227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.366 [2024-07-25 14:54:55.417234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.366 [2024-07-25 14:54:55.417240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.366 [2024-07-25 14:54:55.417255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.366 qpair failed and we were unable to recover it. 00:27:35.366 [2024-07-25 14:54:55.426816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.366 [2024-07-25 14:54:55.426967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.366 [2024-07-25 14:54:55.426986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.366 [2024-07-25 14:54:55.426993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.366 [2024-07-25 14:54:55.426999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.366 [2024-07-25 14:54:55.427016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.366 qpair failed and we were unable to recover it. 
00:27:35.366 [2024-07-25 14:54:55.436930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.366 [2024-07-25 14:54:55.437085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.366 [2024-07-25 14:54:55.437104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.366 [2024-07-25 14:54:55.437111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.366 [2024-07-25 14:54:55.437116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.366 [2024-07-25 14:54:55.437133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.366 qpair failed and we were unable to recover it. 00:27:35.366 [2024-07-25 14:54:55.446952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.366 [2024-07-25 14:54:55.447099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.366 [2024-07-25 14:54:55.447118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.366 [2024-07-25 14:54:55.447125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.366 [2024-07-25 14:54:55.447131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.366 [2024-07-25 14:54:55.447148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.366 qpair failed and we were unable to recover it. 00:27:35.366 [2024-07-25 14:54:55.456925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.366 [2024-07-25 14:54:55.457083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.366 [2024-07-25 14:54:55.457102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.366 [2024-07-25 14:54:55.457109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.366 [2024-07-25 14:54:55.457119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.366 [2024-07-25 14:54:55.457136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.366 qpair failed and we were unable to recover it. 
00:27:35.366 [2024-07-25 14:54:55.467012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.366 [2024-07-25 14:54:55.467169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.366 [2024-07-25 14:54:55.467187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.366 [2024-07-25 14:54:55.467195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.366 [2024-07-25 14:54:55.467201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.366 [2024-07-25 14:54:55.467217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.366 qpair failed and we were unable to recover it. 00:27:35.366 [2024-07-25 14:54:55.477026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.366 [2024-07-25 14:54:55.477206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.366 [2024-07-25 14:54:55.477226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.366 [2024-07-25 14:54:55.477232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.366 [2024-07-25 14:54:55.477238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.366 [2024-07-25 14:54:55.477255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.366 qpair failed and we were unable to recover it. 00:27:35.366 [2024-07-25 14:54:55.487075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.366 [2024-07-25 14:54:55.487217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.366 [2024-07-25 14:54:55.487236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.366 [2024-07-25 14:54:55.487243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.366 [2024-07-25 14:54:55.487249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.366 [2024-07-25 14:54:55.487266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.366 qpair failed and we were unable to recover it. 
00:27:35.366 [2024-07-25 14:54:55.497098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.366 [2024-07-25 14:54:55.497250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.366 [2024-07-25 14:54:55.497269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.366 [2024-07-25 14:54:55.497275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.366 [2024-07-25 14:54:55.497281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.366 [2024-07-25 14:54:55.497298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.366 qpair failed and we were unable to recover it. 00:27:35.366 [2024-07-25 14:54:55.507119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.366 [2024-07-25 14:54:55.507269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.366 [2024-07-25 14:54:55.507288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.366 [2024-07-25 14:54:55.507295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.366 [2024-07-25 14:54:55.507301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.366 [2024-07-25 14:54:55.507317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.366 qpair failed and we were unable to recover it. 00:27:35.366 [2024-07-25 14:54:55.517158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.366 [2024-07-25 14:54:55.517306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.366 [2024-07-25 14:54:55.517324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.366 [2024-07-25 14:54:55.517331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.366 [2024-07-25 14:54:55.517337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.366 [2024-07-25 14:54:55.517353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.366 qpair failed and we were unable to recover it. 
00:27:35.366 [2024-07-25 14:54:55.527194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.366 [2024-07-25 14:54:55.527340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.366 [2024-07-25 14:54:55.527358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.366 [2024-07-25 14:54:55.527365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.366 [2024-07-25 14:54:55.527370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.366 [2024-07-25 14:54:55.527387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.366 qpair failed and we were unable to recover it. 00:27:35.366 [2024-07-25 14:54:55.537220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.366 [2024-07-25 14:54:55.537366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.366 [2024-07-25 14:54:55.537385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.366 [2024-07-25 14:54:55.537392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.367 [2024-07-25 14:54:55.537398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.367 [2024-07-25 14:54:55.537415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.367 qpair failed and we were unable to recover it. 00:27:35.367 [2024-07-25 14:54:55.547243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.367 [2024-07-25 14:54:55.547390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.367 [2024-07-25 14:54:55.547409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.367 [2024-07-25 14:54:55.547415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.367 [2024-07-25 14:54:55.547424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.367 [2024-07-25 14:54:55.547441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.367 qpair failed and we were unable to recover it. 
00:27:35.367 [2024-07-25 14:54:55.557275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.367 [2024-07-25 14:54:55.557415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.367 [2024-07-25 14:54:55.557434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.367 [2024-07-25 14:54:55.557440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.367 [2024-07-25 14:54:55.557446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.367 [2024-07-25 14:54:55.557463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.367 qpair failed and we were unable to recover it. 00:27:35.367 [2024-07-25 14:54:55.567507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.367 [2024-07-25 14:54:55.567646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.367 [2024-07-25 14:54:55.567665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.367 [2024-07-25 14:54:55.567672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.367 [2024-07-25 14:54:55.567678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.367 [2024-07-25 14:54:55.567695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.367 qpair failed and we were unable to recover it. 00:27:35.367 [2024-07-25 14:54:55.577332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.367 [2024-07-25 14:54:55.577477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.367 [2024-07-25 14:54:55.577495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.367 [2024-07-25 14:54:55.577502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.367 [2024-07-25 14:54:55.577508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.367 [2024-07-25 14:54:55.577524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.367 qpair failed and we were unable to recover it. 
00:27:35.367 [2024-07-25 14:54:55.587357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.367 [2024-07-25 14:54:55.587501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.367 [2024-07-25 14:54:55.587520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.367 [2024-07-25 14:54:55.587527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.367 [2024-07-25 14:54:55.587533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.367 [2024-07-25 14:54:55.587550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.367 qpair failed and we were unable to recover it. 00:27:35.367 [2024-07-25 14:54:55.597384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.367 [2024-07-25 14:54:55.597528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.367 [2024-07-25 14:54:55.597547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.367 [2024-07-25 14:54:55.597554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.367 [2024-07-25 14:54:55.597560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.367 [2024-07-25 14:54:55.597577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.367 qpair failed and we were unable to recover it. 00:27:35.367 [2024-07-25 14:54:55.607405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.367 [2024-07-25 14:54:55.607551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.367 [2024-07-25 14:54:55.607569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.367 [2024-07-25 14:54:55.607576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.367 [2024-07-25 14:54:55.607582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.367 [2024-07-25 14:54:55.607599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.367 qpair failed and we were unable to recover it. 
00:27:35.367 [2024-07-25 14:54:55.617458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.367 [2024-07-25 14:54:55.617602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.367 [2024-07-25 14:54:55.617620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.367 [2024-07-25 14:54:55.617627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.367 [2024-07-25 14:54:55.617633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.367 [2024-07-25 14:54:55.617650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.367 qpair failed and we were unable to recover it. 00:27:35.367 [2024-07-25 14:54:55.627384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.367 [2024-07-25 14:54:55.627530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.367 [2024-07-25 14:54:55.627548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.367 [2024-07-25 14:54:55.627555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.367 [2024-07-25 14:54:55.627561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.367 [2024-07-25 14:54:55.627578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.367 qpair failed and we were unable to recover it. 00:27:35.367 [2024-07-25 14:54:55.637497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.367 [2024-07-25 14:54:55.637640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.367 [2024-07-25 14:54:55.637658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.367 [2024-07-25 14:54:55.637668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.367 [2024-07-25 14:54:55.637674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.367 [2024-07-25 14:54:55.637691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.367 qpair failed and we were unable to recover it. 
00:27:35.367 [2024-07-25 14:54:55.647525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.367 [2024-07-25 14:54:55.647672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.367 [2024-07-25 14:54:55.647690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.367 [2024-07-25 14:54:55.647697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.367 [2024-07-25 14:54:55.647703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.367 [2024-07-25 14:54:55.647719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.367 qpair failed and we were unable to recover it. 00:27:35.628 [2024-07-25 14:54:55.657546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.628 [2024-07-25 14:54:55.657692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.628 [2024-07-25 14:54:55.657711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.628 [2024-07-25 14:54:55.657718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.628 [2024-07-25 14:54:55.657724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.628 [2024-07-25 14:54:55.657742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.628 qpair failed and we were unable to recover it. 00:27:35.628 [2024-07-25 14:54:55.667579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.628 [2024-07-25 14:54:55.667725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.628 [2024-07-25 14:54:55.667744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.628 [2024-07-25 14:54:55.667751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.628 [2024-07-25 14:54:55.667757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.629 [2024-07-25 14:54:55.667773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.629 qpair failed and we were unable to recover it. 
00:27:35.629 [2024-07-25 14:54:55.677604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.629 [2024-07-25 14:54:55.677748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.629 [2024-07-25 14:54:55.677767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.629 [2024-07-25 14:54:55.677774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.629 [2024-07-25 14:54:55.677779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.629 [2024-07-25 14:54:55.677796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.629 qpair failed and we were unable to recover it. 00:27:35.629 [2024-07-25 14:54:55.687638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.629 [2024-07-25 14:54:55.687785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.629 [2024-07-25 14:54:55.687803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.629 [2024-07-25 14:54:55.687810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.629 [2024-07-25 14:54:55.687816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.629 [2024-07-25 14:54:55.687832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.629 qpair failed and we were unable to recover it. 00:27:35.629 [2024-07-25 14:54:55.697685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.629 [2024-07-25 14:54:55.697838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.629 [2024-07-25 14:54:55.697856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.629 [2024-07-25 14:54:55.697862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.629 [2024-07-25 14:54:55.697868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.629 [2024-07-25 14:54:55.697885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.629 qpair failed and we were unable to recover it. 
00:27:35.629 [2024-07-25 14:54:55.707704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.629 [2024-07-25 14:54:55.707850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.629 [2024-07-25 14:54:55.707868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.629 [2024-07-25 14:54:55.707874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.629 [2024-07-25 14:54:55.707880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.629 [2024-07-25 14:54:55.707897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.629 qpair failed and we were unable to recover it. 00:27:35.629 [2024-07-25 14:54:55.717732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.629 [2024-07-25 14:54:55.717873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.629 [2024-07-25 14:54:55.717891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.629 [2024-07-25 14:54:55.717898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.629 [2024-07-25 14:54:55.717904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.629 [2024-07-25 14:54:55.717921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.629 qpair failed and we were unable to recover it. 00:27:35.629 [2024-07-25 14:54:55.727756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.629 [2024-07-25 14:54:55.727899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.629 [2024-07-25 14:54:55.727918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.629 [2024-07-25 14:54:55.727929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.629 [2024-07-25 14:54:55.727934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.629 [2024-07-25 14:54:55.727951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.629 qpair failed and we were unable to recover it. 
00:27:35.629 [2024-07-25 14:54:55.737788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.629 [2024-07-25 14:54:55.737931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.629 [2024-07-25 14:54:55.737950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.629 [2024-07-25 14:54:55.737956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.629 [2024-07-25 14:54:55.737963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.629 [2024-07-25 14:54:55.737979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.629 qpair failed and we were unable to recover it. 00:27:35.629 [2024-07-25 14:54:55.747829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.629 [2024-07-25 14:54:55.747989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.629 [2024-07-25 14:54:55.748008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.629 [2024-07-25 14:54:55.748015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.629 [2024-07-25 14:54:55.748021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.629 [2024-07-25 14:54:55.748037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.629 qpair failed and we were unable to recover it. 00:27:35.629 [2024-07-25 14:54:55.757844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.629 [2024-07-25 14:54:55.757989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.629 [2024-07-25 14:54:55.758007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.629 [2024-07-25 14:54:55.758014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.629 [2024-07-25 14:54:55.758020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.629 [2024-07-25 14:54:55.758037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.629 qpair failed and we were unable to recover it. 
00:27:35.629 [2024-07-25 14:54:55.767868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.629 [2024-07-25 14:54:55.768009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.629 [2024-07-25 14:54:55.768027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.629 [2024-07-25 14:54:55.768034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.629 [2024-07-25 14:54:55.768040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.629 [2024-07-25 14:54:55.768062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.629 qpair failed and we were unable to recover it. 00:27:35.629 [2024-07-25 14:54:55.777909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.629 [2024-07-25 14:54:55.778063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.629 [2024-07-25 14:54:55.778081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.629 [2024-07-25 14:54:55.778088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.629 [2024-07-25 14:54:55.778093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.629 [2024-07-25 14:54:55.778110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.629 qpair failed and we were unable to recover it. 00:27:35.629 [2024-07-25 14:54:55.787937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.629 [2024-07-25 14:54:55.788093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.629 [2024-07-25 14:54:55.788113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.629 [2024-07-25 14:54:55.788120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.629 [2024-07-25 14:54:55.788126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.629 [2024-07-25 14:54:55.788143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.629 qpair failed and we were unable to recover it. 
00:27:35.629 [2024-07-25 14:54:55.797954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.629 [2024-07-25 14:54:55.798101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.629 [2024-07-25 14:54:55.798119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.630 [2024-07-25 14:54:55.798126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.630 [2024-07-25 14:54:55.798132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.630 [2024-07-25 14:54:55.798149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.630 qpair failed and we were unable to recover it. 00:27:35.630 [2024-07-25 14:54:55.807990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.630 [2024-07-25 14:54:55.808143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.630 [2024-07-25 14:54:55.808162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.630 [2024-07-25 14:54:55.808169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.630 [2024-07-25 14:54:55.808175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.630 [2024-07-25 14:54:55.808191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.630 qpair failed and we were unable to recover it. 00:27:35.630 [2024-07-25 14:54:55.818013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.630 [2024-07-25 14:54:55.818369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.630 [2024-07-25 14:54:55.818388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.630 [2024-07-25 14:54:55.818398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.630 [2024-07-25 14:54:55.818404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.630 [2024-07-25 14:54:55.818420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.630 qpair failed and we were unable to recover it. 
00:27:35.630 [2024-07-25 14:54:55.828036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.630 [2024-07-25 14:54:55.828187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.630 [2024-07-25 14:54:55.828205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.630 [2024-07-25 14:54:55.828212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.630 [2024-07-25 14:54:55.828218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.630 [2024-07-25 14:54:55.828235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.630 qpair failed and we were unable to recover it. 00:27:35.630 [2024-07-25 14:54:55.838080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.630 [2024-07-25 14:54:55.838234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.630 [2024-07-25 14:54:55.838252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.630 [2024-07-25 14:54:55.838259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.630 [2024-07-25 14:54:55.838265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.630 [2024-07-25 14:54:55.838281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.630 qpair failed and we were unable to recover it. 00:27:35.630 [2024-07-25 14:54:55.848103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.630 [2024-07-25 14:54:55.848252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.630 [2024-07-25 14:54:55.848270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.630 [2024-07-25 14:54:55.848277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.630 [2024-07-25 14:54:55.848283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.630 [2024-07-25 14:54:55.848300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.630 qpair failed and we were unable to recover it. 
00:27:35.630 [2024-07-25 14:54:55.858122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.630 [2024-07-25 14:54:55.858267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.630 [2024-07-25 14:54:55.858286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.630 [2024-07-25 14:54:55.858293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.630 [2024-07-25 14:54:55.858298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.630 [2024-07-25 14:54:55.858315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.630 qpair failed and we were unable to recover it. 00:27:35.630 [2024-07-25 14:54:55.868150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.630 [2024-07-25 14:54:55.868315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.630 [2024-07-25 14:54:55.868333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.630 [2024-07-25 14:54:55.868341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.630 [2024-07-25 14:54:55.868346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.630 [2024-07-25 14:54:55.868363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.630 qpair failed and we were unable to recover it. 00:27:35.630 [2024-07-25 14:54:55.878164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.630 [2024-07-25 14:54:55.878309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.630 [2024-07-25 14:54:55.878328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.630 [2024-07-25 14:54:55.878334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.630 [2024-07-25 14:54:55.878340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.630 [2024-07-25 14:54:55.878357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.630 qpair failed and we were unable to recover it. 
00:27:35.630 [2024-07-25 14:54:55.888205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.630 [2024-07-25 14:54:55.888352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.630 [2024-07-25 14:54:55.888371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.630 [2024-07-25 14:54:55.888378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.630 [2024-07-25 14:54:55.888384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.630 [2024-07-25 14:54:55.888401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.630 qpair failed and we were unable to recover it. 00:27:35.630 [2024-07-25 14:54:55.898172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.630 [2024-07-25 14:54:55.898323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.630 [2024-07-25 14:54:55.898341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.630 [2024-07-25 14:54:55.898348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.630 [2024-07-25 14:54:55.898354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.630 [2024-07-25 14:54:55.898371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.630 qpair failed and we were unable to recover it. 00:27:35.630 [2024-07-25 14:54:55.908252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.630 [2024-07-25 14:54:55.908396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.630 [2024-07-25 14:54:55.908418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.630 [2024-07-25 14:54:55.908425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.630 [2024-07-25 14:54:55.908430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.630 [2024-07-25 14:54:55.908447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.630 qpair failed and we were unable to recover it. 
00:27:35.630 [2024-07-25 14:54:55.918300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.630 [2024-07-25 14:54:55.918446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.630 [2024-07-25 14:54:55.918464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.630 [2024-07-25 14:54:55.918471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.631 [2024-07-25 14:54:55.918477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.631 [2024-07-25 14:54:55.918493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.631 qpair failed and we were unable to recover it. 00:27:35.892 [2024-07-25 14:54:55.928319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.892 [2024-07-25 14:54:55.928466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.892 [2024-07-25 14:54:55.928484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.892 [2024-07-25 14:54:55.928491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.892 [2024-07-25 14:54:55.928497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.892 [2024-07-25 14:54:55.928514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.892 qpair failed and we were unable to recover it. 00:27:35.892 [2024-07-25 14:54:55.938330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.892 [2024-07-25 14:54:55.938479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.892 [2024-07-25 14:54:55.938496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.892 [2024-07-25 14:54:55.938503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.892 [2024-07-25 14:54:55.938509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.892 [2024-07-25 14:54:55.938525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.892 qpair failed and we were unable to recover it. 
00:27:35.892 [2024-07-25 14:54:55.948417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.892 [2024-07-25 14:54:55.948578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.892 [2024-07-25 14:54:55.948596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.892 [2024-07-25 14:54:55.948603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.892 [2024-07-25 14:54:55.948608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.892 [2024-07-25 14:54:55.948625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.892 qpair failed and we were unable to recover it. 00:27:35.892 [2024-07-25 14:54:55.958404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.892 [2024-07-25 14:54:55.958554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.892 [2024-07-25 14:54:55.958573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.892 [2024-07-25 14:54:55.958580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.892 [2024-07-25 14:54:55.958586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.892 [2024-07-25 14:54:55.958602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.892 qpair failed and we were unable to recover it. 00:27:35.892 [2024-07-25 14:54:55.968371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.892 [2024-07-25 14:54:55.968519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.892 [2024-07-25 14:54:55.968537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.892 [2024-07-25 14:54:55.968544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.892 [2024-07-25 14:54:55.968551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.892 [2024-07-25 14:54:55.968567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.892 qpair failed and we were unable to recover it. 
00:27:35.892 [2024-07-25 14:54:55.978476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.892 [2024-07-25 14:54:55.978624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.892 [2024-07-25 14:54:55.978642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.892 [2024-07-25 14:54:55.978649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.892 [2024-07-25 14:54:55.978655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.892 [2024-07-25 14:54:55.978672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.892 qpair failed and we were unable to recover it. 00:27:35.892 [2024-07-25 14:54:55.988471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.892 [2024-07-25 14:54:55.988619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.892 [2024-07-25 14:54:55.988638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.892 [2024-07-25 14:54:55.988645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.892 [2024-07-25 14:54:55.988651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.892 [2024-07-25 14:54:55.988668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.892 qpair failed and we were unable to recover it. 00:27:35.892 [2024-07-25 14:54:55.998512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.892 [2024-07-25 14:54:55.998656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.892 [2024-07-25 14:54:55.998678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.892 [2024-07-25 14:54:55.998685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.892 [2024-07-25 14:54:55.998691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.892 [2024-07-25 14:54:55.998708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.892 qpair failed and we were unable to recover it. 
00:27:35.892 [2024-07-25 14:54:56.008545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.892 [2024-07-25 14:54:56.008685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.892 [2024-07-25 14:54:56.008704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.892 [2024-07-25 14:54:56.008711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.892 [2024-07-25 14:54:56.008717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.892 [2024-07-25 14:54:56.008733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.892 qpair failed and we were unable to recover it. 00:27:35.892 [2024-07-25 14:54:56.018513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.893 [2024-07-25 14:54:56.018664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.893 [2024-07-25 14:54:56.018683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.893 [2024-07-25 14:54:56.018690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.893 [2024-07-25 14:54:56.018696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.893 [2024-07-25 14:54:56.018712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.893 qpair failed and we were unable to recover it. 00:27:35.893 [2024-07-25 14:54:56.028597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.893 [2024-07-25 14:54:56.028746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.893 [2024-07-25 14:54:56.028764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.893 [2024-07-25 14:54:56.028771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.893 [2024-07-25 14:54:56.028777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.893 [2024-07-25 14:54:56.028794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.893 qpair failed and we were unable to recover it. 
00:27:35.893 [2024-07-25 14:54:56.038614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.893 [2024-07-25 14:54:56.038764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.893 [2024-07-25 14:54:56.038783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.893 [2024-07-25 14:54:56.038789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.893 [2024-07-25 14:54:56.038796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.893 [2024-07-25 14:54:56.038816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.893 qpair failed and we were unable to recover it. 00:27:35.893 [2024-07-25 14:54:56.048591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.893 [2024-07-25 14:54:56.048733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.893 [2024-07-25 14:54:56.048752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.893 [2024-07-25 14:54:56.048759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.893 [2024-07-25 14:54:56.048765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.893 [2024-07-25 14:54:56.048781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.893 qpair failed and we were unable to recover it. 00:27:35.893 [2024-07-25 14:54:56.058697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.893 [2024-07-25 14:54:56.058847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.893 [2024-07-25 14:54:56.058865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.893 [2024-07-25 14:54:56.058872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.893 [2024-07-25 14:54:56.058878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.893 [2024-07-25 14:54:56.058895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.893 qpair failed and we were unable to recover it. 
00:27:35.893 [2024-07-25 14:54:56.068705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.893 [2024-07-25 14:54:56.068859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.893 [2024-07-25 14:54:56.068878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.893 [2024-07-25 14:54:56.068884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.893 [2024-07-25 14:54:56.068890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.893 [2024-07-25 14:54:56.068907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.893 qpair failed and we were unable to recover it. 00:27:35.893 [2024-07-25 14:54:56.078747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.893 [2024-07-25 14:54:56.078893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.893 [2024-07-25 14:54:56.078912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.893 [2024-07-25 14:54:56.078919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.893 [2024-07-25 14:54:56.078925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.893 [2024-07-25 14:54:56.078942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.893 qpair failed and we were unable to recover it. 00:27:35.893 [2024-07-25 14:54:56.088775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.893 [2024-07-25 14:54:56.088920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.893 [2024-07-25 14:54:56.088946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.893 [2024-07-25 14:54:56.088953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.893 [2024-07-25 14:54:56.088958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.893 [2024-07-25 14:54:56.088975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.893 qpair failed and we were unable to recover it. 
00:27:35.893 [2024-07-25 14:54:56.098744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.893 [2024-07-25 14:54:56.098890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.893 [2024-07-25 14:54:56.098909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.893 [2024-07-25 14:54:56.098915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.893 [2024-07-25 14:54:56.098921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.893 [2024-07-25 14:54:56.098938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.893 qpair failed and we were unable to recover it. 00:27:35.893 [2024-07-25 14:54:56.108829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.893 [2024-07-25 14:54:56.108979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.893 [2024-07-25 14:54:56.108998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.893 [2024-07-25 14:54:56.109005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.893 [2024-07-25 14:54:56.109011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.893 [2024-07-25 14:54:56.109028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.893 qpair failed and we were unable to recover it. 00:27:35.893 [2024-07-25 14:54:56.118853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.893 [2024-07-25 14:54:56.118999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.893 [2024-07-25 14:54:56.119017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.893 [2024-07-25 14:54:56.119024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.893 [2024-07-25 14:54:56.119030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.893 [2024-07-25 14:54:56.119053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.893 qpair failed and we were unable to recover it. 
00:27:35.893 [2024-07-25 14:54:56.128891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.893 [2024-07-25 14:54:56.129039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.893 [2024-07-25 14:54:56.129062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.893 [2024-07-25 14:54:56.129069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.893 [2024-07-25 14:54:56.129075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.893 [2024-07-25 14:54:56.129095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.893 qpair failed and we were unable to recover it. 00:27:35.893 [2024-07-25 14:54:56.138905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.893 [2024-07-25 14:54:56.139057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.893 [2024-07-25 14:54:56.139075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.893 [2024-07-25 14:54:56.139082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.893 [2024-07-25 14:54:56.139088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.893 [2024-07-25 14:54:56.139105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.893 qpair failed and we were unable to recover it. 00:27:35.893 [2024-07-25 14:54:56.148943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.893 [2024-07-25 14:54:56.149097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.894 [2024-07-25 14:54:56.149117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.894 [2024-07-25 14:54:56.149125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.894 [2024-07-25 14:54:56.149131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.894 [2024-07-25 14:54:56.149147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.894 qpair failed and we were unable to recover it. 
00:27:35.894 [2024-07-25 14:54:56.158991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.894 [2024-07-25 14:54:56.159154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.894 [2024-07-25 14:54:56.159173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.894 [2024-07-25 14:54:56.159180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.894 [2024-07-25 14:54:56.159186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.894 [2024-07-25 14:54:56.159203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.894 qpair failed and we were unable to recover it. 00:27:35.894 [2024-07-25 14:54:56.169003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.894 [2024-07-25 14:54:56.169155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.894 [2024-07-25 14:54:56.169175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.894 [2024-07-25 14:54:56.169181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.894 [2024-07-25 14:54:56.169188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.894 [2024-07-25 14:54:56.169204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.894 qpair failed and we were unable to recover it. 00:27:35.894 [2024-07-25 14:54:56.179028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.894 [2024-07-25 14:54:56.179197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.894 [2024-07-25 14:54:56.179219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.894 [2024-07-25 14:54:56.179226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.894 [2024-07-25 14:54:56.179232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:35.894 [2024-07-25 14:54:56.179249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.894 qpair failed and we were unable to recover it. 
00:27:36.155 [2024-07-25 14:54:56.189060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.155 [2024-07-25 14:54:56.189208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.155 [2024-07-25 14:54:56.189226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.155 [2024-07-25 14:54:56.189233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.155 [2024-07-25 14:54:56.189239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.155 [2024-07-25 14:54:56.189256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.155 qpair failed and we were unable to recover it. 00:27:36.155 [2024-07-25 14:54:56.199064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.155 [2024-07-25 14:54:56.199209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.155 [2024-07-25 14:54:56.199228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.156 [2024-07-25 14:54:56.199234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.156 [2024-07-25 14:54:56.199241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.156 [2024-07-25 14:54:56.199257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.156 qpair failed and we were unable to recover it. 00:27:36.156 [2024-07-25 14:54:56.209101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.156 [2024-07-25 14:54:56.209248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.156 [2024-07-25 14:54:56.209266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.156 [2024-07-25 14:54:56.209273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.156 [2024-07-25 14:54:56.209279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.156 [2024-07-25 14:54:56.209295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.156 qpair failed and we were unable to recover it. 
00:27:36.156 [2024-07-25 14:54:56.219122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.156 [2024-07-25 14:54:56.219272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.156 [2024-07-25 14:54:56.219290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.156 [2024-07-25 14:54:56.219297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.156 [2024-07-25 14:54:56.219303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.156 [2024-07-25 14:54:56.219323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.156 qpair failed and we were unable to recover it. 00:27:36.156 [2024-07-25 14:54:56.229132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.156 [2024-07-25 14:54:56.229502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.156 [2024-07-25 14:54:56.229521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.156 [2024-07-25 14:54:56.229527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.156 [2024-07-25 14:54:56.229533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.156 [2024-07-25 14:54:56.229550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.156 qpair failed and we were unable to recover it. 00:27:36.156 [2024-07-25 14:54:56.239182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.156 [2024-07-25 14:54:56.239328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.156 [2024-07-25 14:54:56.239347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.156 [2024-07-25 14:54:56.239354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.156 [2024-07-25 14:54:56.239360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.156 [2024-07-25 14:54:56.239376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.156 qpair failed and we were unable to recover it. 
00:27:36.156 [2024-07-25 14:54:56.249212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.156 [2024-07-25 14:54:56.249354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.156 [2024-07-25 14:54:56.249373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.156 [2024-07-25 14:54:56.249379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.156 [2024-07-25 14:54:56.249385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.156 [2024-07-25 14:54:56.249402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.156 qpair failed and we were unable to recover it. 00:27:36.156 [2024-07-25 14:54:56.259262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.156 [2024-07-25 14:54:56.259408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.156 [2024-07-25 14:54:56.259427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.156 [2024-07-25 14:54:56.259434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.156 [2024-07-25 14:54:56.259440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.156 [2024-07-25 14:54:56.259457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.156 qpair failed and we were unable to recover it. 00:27:36.156 [2024-07-25 14:54:56.269213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.156 [2024-07-25 14:54:56.269363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.156 [2024-07-25 14:54:56.269384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.156 [2024-07-25 14:54:56.269391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.156 [2024-07-25 14:54:56.269397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.156 [2024-07-25 14:54:56.269413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.156 qpair failed and we were unable to recover it. 
00:27:36.156 [2024-07-25 14:54:56.279292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.156 [2024-07-25 14:54:56.279440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.156 [2024-07-25 14:54:56.279458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.156 [2024-07-25 14:54:56.279466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.156 [2024-07-25 14:54:56.279472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.156 [2024-07-25 14:54:56.279488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.156 qpair failed and we were unable to recover it. 00:27:36.156 [2024-07-25 14:54:56.289321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.156 [2024-07-25 14:54:56.289467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.156 [2024-07-25 14:54:56.289486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.156 [2024-07-25 14:54:56.289493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.156 [2024-07-25 14:54:56.289499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.156 [2024-07-25 14:54:56.289515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.156 qpair failed and we were unable to recover it. 00:27:36.156 [2024-07-25 14:54:56.299393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.156 [2024-07-25 14:54:56.299540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.156 [2024-07-25 14:54:56.299557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.156 [2024-07-25 14:54:56.299564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.156 [2024-07-25 14:54:56.299570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.156 [2024-07-25 14:54:56.299587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.156 qpair failed and we were unable to recover it. 
00:27:36.156 [2024-07-25 14:54:56.309519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.156 [2024-07-25 14:54:56.309677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.156 [2024-07-25 14:54:56.309696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.156 [2024-07-25 14:54:56.309702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.156 [2024-07-25 14:54:56.309712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.156 [2024-07-25 14:54:56.309729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.156 qpair failed and we were unable to recover it. 00:27:36.156 [2024-07-25 14:54:56.319414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.156 [2024-07-25 14:54:56.319563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.156 [2024-07-25 14:54:56.319581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.156 [2024-07-25 14:54:56.319588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.156 [2024-07-25 14:54:56.319594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.156 [2024-07-25 14:54:56.319610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.156 qpair failed and we were unable to recover it. 00:27:36.156 [2024-07-25 14:54:56.329453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.156 [2024-07-25 14:54:56.329596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.156 [2024-07-25 14:54:56.329615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.156 [2024-07-25 14:54:56.329622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.157 [2024-07-25 14:54:56.329628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.157 [2024-07-25 14:54:56.329644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.157 qpair failed and we were unable to recover it. 
00:27:36.157 [2024-07-25 14:54:56.339482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.157 [2024-07-25 14:54:56.339631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.157 [2024-07-25 14:54:56.339650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.157 [2024-07-25 14:54:56.339657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.157 [2024-07-25 14:54:56.339663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.157 [2024-07-25 14:54:56.339679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.157 qpair failed and we were unable to recover it. 00:27:36.157 [2024-07-25 14:54:56.349519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.157 [2024-07-25 14:54:56.349673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.157 [2024-07-25 14:54:56.349692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.157 [2024-07-25 14:54:56.349699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.157 [2024-07-25 14:54:56.349705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.157 [2024-07-25 14:54:56.349721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.157 qpair failed and we were unable to recover it. 00:27:36.157 [2024-07-25 14:54:56.359546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.157 [2024-07-25 14:54:56.359694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.157 [2024-07-25 14:54:56.359713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.157 [2024-07-25 14:54:56.359720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.157 [2024-07-25 14:54:56.359726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.157 [2024-07-25 14:54:56.359744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.157 qpair failed and we were unable to recover it. 
00:27:36.157 [2024-07-25 14:54:56.369574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.157 [2024-07-25 14:54:56.369720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.157 [2024-07-25 14:54:56.369738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.157 [2024-07-25 14:54:56.369745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.157 [2024-07-25 14:54:56.369751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.157 [2024-07-25 14:54:56.369768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.157 qpair failed and we were unable to recover it. 00:27:36.157 [2024-07-25 14:54:56.379605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.157 [2024-07-25 14:54:56.379748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.157 [2024-07-25 14:54:56.379767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.157 [2024-07-25 14:54:56.379774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.157 [2024-07-25 14:54:56.379779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.157 [2024-07-25 14:54:56.379796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.157 qpair failed and we were unable to recover it. 00:27:36.157 [2024-07-25 14:54:56.389592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.157 [2024-07-25 14:54:56.389751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.157 [2024-07-25 14:54:56.389769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.157 [2024-07-25 14:54:56.389776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.157 [2024-07-25 14:54:56.389782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.157 [2024-07-25 14:54:56.389800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.157 qpair failed and we were unable to recover it. 
00:27:36.157 [2024-07-25 14:54:56.399654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.157 [2024-07-25 14:54:56.399798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.157 [2024-07-25 14:54:56.399817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.157 [2024-07-25 14:54:56.399824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.157 [2024-07-25 14:54:56.399834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.157 [2024-07-25 14:54:56.399851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.157 qpair failed and we were unable to recover it. 00:27:36.157 [2024-07-25 14:54:56.409685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.157 [2024-07-25 14:54:56.409832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.157 [2024-07-25 14:54:56.409850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.157 [2024-07-25 14:54:56.409857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.157 [2024-07-25 14:54:56.409863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.157 [2024-07-25 14:54:56.409879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.157 qpair failed and we were unable to recover it. 00:27:36.157 [2024-07-25 14:54:56.419723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.157 [2024-07-25 14:54:56.419874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.157 [2024-07-25 14:54:56.419893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.157 [2024-07-25 14:54:56.419900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.157 [2024-07-25 14:54:56.419906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.157 [2024-07-25 14:54:56.419922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.157 qpair failed and we were unable to recover it. 
00:27:36.157 [2024-07-25 14:54:56.429737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.157 [2024-07-25 14:54:56.429887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.157 [2024-07-25 14:54:56.429906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.157 [2024-07-25 14:54:56.429913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.157 [2024-07-25 14:54:56.429919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.157 [2024-07-25 14:54:56.429935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.157 qpair failed and we were unable to recover it. 00:27:36.157 [2024-07-25 14:54:56.439766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.157 [2024-07-25 14:54:56.439914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.157 [2024-07-25 14:54:56.439932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.157 [2024-07-25 14:54:56.439939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.157 [2024-07-25 14:54:56.439945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.157 [2024-07-25 14:54:56.439963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.157 qpair failed and we were unable to recover it. 00:27:36.418 [2024-07-25 14:54:56.449764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.418 [2024-07-25 14:54:56.449955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.418 [2024-07-25 14:54:56.449974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.418 [2024-07-25 14:54:56.449981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.418 [2024-07-25 14:54:56.449986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.418 [2024-07-25 14:54:56.450003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.418 qpair failed and we were unable to recover it. 
00:27:36.418 [2024-07-25 14:54:56.459817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.418 [2024-07-25 14:54:56.459994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.418 [2024-07-25 14:54:56.460013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.418 [2024-07-25 14:54:56.460020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.418 [2024-07-25 14:54:56.460026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaf3ed0 00:27:36.418 [2024-07-25 14:54:56.460048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.418 qpair failed and we were unable to recover it. 00:27:36.418 [2024-07-25 14:54:56.469938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.418 [2024-07-25 14:54:56.470104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.418 [2024-07-25 14:54:56.470129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.418 [2024-07-25 14:54:56.470139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.418 [2024-07-25 14:54:56.470145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82b0000b90 00:27:36.418 [2024-07-25 14:54:56.470166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.418 qpair failed and we were unable to recover it. 00:27:36.418 [2024-07-25 14:54:56.479840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.418 [2024-07-25 14:54:56.479983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.418 [2024-07-25 14:54:56.480003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.418 [2024-07-25 14:54:56.480010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.418 [2024-07-25 14:54:56.480016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82b0000b90 00:27:36.418 [2024-07-25 14:54:56.480033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:36.418 qpair failed and we were unable to recover it. 00:27:36.418 [2024-07-25 14:54:56.480177] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:36.418 A controller has encountered a failure and is being reset. 
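The run of identical failures above is the intended behaviour of this disconnect test: after the target drops the association it no longer recognizes controller ID 0x1, so every retried Fabrics CONNECT for an I/O queue completes with sct 1 / sc 130 (status type 1 = command specific, status code 0x82 = Connect Invalid Parameters), the host poller then surfaces CQ transport error -6 (-ENXIO), and once the Keep Alive submission also fails the controller is scheduled for reset. The fragment below is a minimal, hypothetical host-side sketch of how that error path is typically consumed through the public SPDK NVMe API; it is not the test application itself, and the function name and messages are placeholders.

    /* reset_on_transport_error.c -- illustrative sketch only, not part of the SPDK test suite */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
        /* A negative return (-ENXIO shows up as the "-6" in the log) means the
         * qpair hit a transport error and cannot be polled any further. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* 0 = no limit */);

        if (rc < 0 || spdk_nvme_ctrlr_is_failed(ctrlr)) {
            fprintf(stderr, "qpair failed (rc=%d), resetting controller\n", rc);
            /* spdk_nvme_ctrlr_reset() tears down the existing qpairs and re-runs
             * the Fabrics CONNECT sequence; I/O qpairs have to be reconnected or
             * reallocated by the application afterwards. */
            if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                fprintf(stderr, "controller reset failed, giving up\n");
            }
        }
    }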
00:27:36.418 [2024-07-25 14:54:56.489958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.418 [2024-07-25 14:54:56.490159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.418 [2024-07-25 14:54:56.490194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.418 [2024-07-25 14:54:56.490206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.418 [2024-07-25 14:54:56.490216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82b8000b90 00:27:36.418 [2024-07-25 14:54:56.490241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.418 qpair failed and we were unable to recover it. 00:27:36.418 [2024-07-25 14:54:56.499959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.418 [2024-07-25 14:54:56.500122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.418 [2024-07-25 14:54:56.500142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.418 [2024-07-25 14:54:56.500150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.418 [2024-07-25 14:54:56.500156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f82b8000b90 00:27:36.418 [2024-07-25 14:54:56.500175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.418 qpair failed and we were unable to recover it. 00:27:36.418 [2024-07-25 14:54:56.500286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb02010 (9): Bad file descriptor 00:27:36.418 Controller properly reset. 00:27:36.418 Initializing NVMe Controllers 00:27:36.418 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:36.418 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:36.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:36.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:36.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:36.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:36.418 Initialization complete. Launching workers. 
00:27:36.418 Starting thread on core 1 00:27:36.418 Starting thread on core 2 00:27:36.418 Starting thread on core 3 00:27:36.418 Starting thread on core 0 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:36.418 00:27:36.418 real 0m11.288s 00:27:36.418 user 0m20.670s 00:27:36.418 sys 0m4.278s 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:36.418 ************************************ 00:27:36.418 END TEST nvmf_target_disconnect_tc2 00:27:36.418 ************************************ 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.418 rmmod nvme_tcp 00:27:36.418 rmmod nvme_fabrics 00:27:36.418 rmmod nvme_keyring 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2486619 ']' 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2486619 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2486619 ']' 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2486619 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2486619 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2486619' 00:27:36.418 killing process with pid 2486619 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2486619 00:27:36.418 14:54:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2486619 00:27:36.678 
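With nvmf_target_disconnect_tc2 complete, nvmftestfini tears the environment down: the nvme-tcp and nvme-fabrics modules are removed (which also drops nvme_keyring) and the target process started for this suite, pid 2486619, is killed. Condensed into an illustrative sketch rather than captured output, and reusing the pid and interface name from this run, the cleanup amounts to:

    kill 2486619 && wait 2486619   # wait works here because nvmf_tgt was launched from the same shell
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1       # drop the test address from the initiator-side port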
14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:36.678 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:36.678 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:36.678 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.678 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:36.678 14:54:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.678 14:54:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.678 14:54:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.219 14:54:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:39.219 00:27:39.219 real 0m19.368s 00:27:39.219 user 0m47.693s 00:27:39.219 sys 0m8.710s 00:27:39.219 14:54:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:39.219 14:54:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:39.219 ************************************ 00:27:39.219 END TEST nvmf_target_disconnect 00:27:39.219 ************************************ 00:27:39.219 14:54:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:39.219 14:54:58 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:27:39.219 14:54:58 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:39.219 14:54:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:39.219 14:54:59 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:27:39.219 00:27:39.219 real 21m0.006s 00:27:39.219 user 45m16.060s 00:27:39.219 sys 6m13.462s 00:27:39.219 14:54:59 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:39.219 14:54:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:39.219 ************************************ 00:27:39.219 END TEST nvmf_tcp 00:27:39.219 ************************************ 00:27:39.219 14:54:59 -- common/autotest_common.sh@1142 -- # return 0 00:27:39.219 14:54:59 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:27:39.219 14:54:59 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:39.219 14:54:59 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:39.219 14:54:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.219 14:54:59 -- common/autotest_common.sh@10 -- # set +x 00:27:39.219 ************************************ 00:27:39.219 START TEST spdkcli_nvmf_tcp 00:27:39.219 ************************************ 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:39.219 * Looking for test storage... 
00:27:39.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2488152 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2488152 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2488152 ']' 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:39.219 14:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:39.219 [2024-07-25 14:54:59.246434] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:27:39.219 [2024-07-25 14:54:59.246483] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488152 ] 00:27:39.219 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.219 [2024-07-25 14:54:59.299166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:39.219 [2024-07-25 14:54:59.381486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.219 [2024-07-25 14:54:59.381493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.790 14:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:39.790 14:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:27:39.790 14:55:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:39.790 14:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:39.790 14:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:40.050 14:55:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:40.050 14:55:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:40.050 14:55:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:40.050 14:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:40.050 14:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:40.050 14:55:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:40.050 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:40.050 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:40.050 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:40.050 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:40.050 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:40.050 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:40.050 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:40.050 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:40.050 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:40.050 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:40.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:40.050 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:40.050 ' 00:27:42.590 [2024-07-25 14:55:02.481814] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.531 [2024-07-25 14:55:03.657732] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:46.101 [2024-07-25 14:55:05.824387] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:47.481 [2024-07-25 14:55:07.686247] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:48.861 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:48.861 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:48.861 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:48.861 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:48.861 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:48.861 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:48.861 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:48.861 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:48.861 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:48.861 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:48.861 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:48.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:48.861 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:49.120 14:55:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:49.120 14:55:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:49.120 14:55:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:49.120 14:55:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:49.120 14:55:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:49.120 14:55:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:49.120 14:55:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:49.120 14:55:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:49.380 14:55:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:49.380 14:55:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:49.380 14:55:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:49.380 14:55:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:49.380 14:55:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:49.638 14:55:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:49.638 14:55:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:49.638 14:55:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:49.638 14:55:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:49.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:49.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:49.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:49.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:49.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:49.638 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:49.638 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:49.638 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:49.638 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:49.638 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:49.638 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:49.638 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:49.638 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:49.638 ' 00:27:54.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:54.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:54.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:54.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:54.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:54.911 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:54.911 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:54.911 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:54.911 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:54.911 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:54.911 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:27:54.911 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:54.911 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:54.911 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:54.911 14:55:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:54.911 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:54.911 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:54.911 14:55:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2488152 00:27:54.911 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2488152 ']' 00:27:54.911 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2488152 00:27:54.911 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:27:54.911 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:54.911 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2488152 00:27:54.911 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:54.911 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:54.911 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2488152' 00:27:54.912 killing process with pid 2488152 00:27:54.912 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2488152 00:27:54.912 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2488152 00:27:55.171 14:55:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:55.171 14:55:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:55.171 14:55:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2488152 ']' 00:27:55.171 14:55:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2488152 00:27:55.171 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2488152 ']' 00:27:55.171 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2488152 00:27:55.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2488152) - No such process 00:27:55.171 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2488152 is not found' 00:27:55.171 Process with pid 2488152 is not found 00:27:55.171 14:55:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:55.171 14:55:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:55.171 14:55:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:55.171 00:27:55.171 real 0m16.275s 00:27:55.171 user 0m34.339s 00:27:55.171 sys 0m0.774s 00:27:55.171 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:55.171 14:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.171 ************************************ 00:27:55.171 END TEST spdkcli_nvmf_tcp 00:27:55.171 ************************************ 00:27:55.171 14:55:15 -- common/autotest_common.sh@1142 -- # return 0 00:27:55.171 14:55:15 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:55.171 14:55:15 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:55.171 14:55:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.171 14:55:15 -- common/autotest_common.sh@10 -- # set +x 00:27:55.171 ************************************ 00:27:55.171 START TEST nvmf_identify_passthru 00:27:55.171 ************************************ 00:27:55.171 14:55:15 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:55.431 * Looking for test storage... 00:27:55.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:55.431 14:55:15 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.431 14:55:15 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.431 14:55:15 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.431 14:55:15 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.431 14:55:15 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.431 14:55:15 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.431 14:55:15 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.431 14:55:15 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:55.431 14:55:15 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:55.431 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.432 14:55:15 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.432 14:55:15 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.432 14:55:15 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.432 14:55:15 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.432 14:55:15 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.432 14:55:15 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.432 14:55:15 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.432 14:55:15 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:55.432 14:55:15 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.432 14:55:15 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.432 14:55:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:55.432 14:55:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:55.432 14:55:15 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:55.432 14:55:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:00.715 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.715 14:55:20 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:28:00.715 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:00.715 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:00.715 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:00.716 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:00.716 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:00.716 Found net devices under 0000:86:00.0: cvl_0_0 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:00.716 Found net devices under 0000:86:00.1: cvl_0_1 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
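The nvmf_tcp_init sequence that follows builds the point-to-point test topology on the two e810 ports discovered above: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed into an illustrative sketch (the same commands appear interleaved with xtrace output below), the setup is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # reachability check toward the namespaced target port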
00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.716 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:00.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:28:00.716 00:28:00.716 --- 10.0.0.2 ping statistics --- 00:28:00.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.716 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:28:00.717 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:00.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:28:00.717 00:28:00.717 --- 10.0.0.1 ping statistics --- 00:28:00.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.717 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:00.717 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.717 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:28:00.717 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:00.717 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.717 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:00.717 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:00.717 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.717 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:00.717 14:55:20 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:00.717 14:55:20 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:00.717 14:55:20 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:00.717 14:55:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:00.717 14:55:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:00.717 14:55:20 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:28:00.717 14:55:20 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:28:00.717 14:55:20 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:28:00.717 14:55:20 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:28:00.717 14:55:20 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:00.717 14:55:20 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:28:00.717 14:55:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:00.717 14:55:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:00.717 14:55:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:00.717 14:55:20 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:00.717 14:55:20 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:28:00.717 14:55:20 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:28:00.717 14:55:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:28:00.717 14:55:20 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:28:00.717 14:55:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:00.717 14:55:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:00.717 14:55:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:00.717 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.908 
14:55:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:28:04.908 14:55:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:04.908 14:55:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:04.908 14:55:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:04.908 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.101 14:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:09.101 14:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:09.101 14:55:29 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:09.101 14:55:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:09.101 14:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:09.101 14:55:29 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:09.101 14:55:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:09.101 14:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2495174 00:28:09.101 14:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:09.101 14:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2495174 00:28:09.101 14:55:29 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2495174 ']' 00:28:09.101 14:55:29 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.101 14:55:29 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:09.101 14:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:09.101 14:55:29 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.101 14:55:29 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:09.101 14:55:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:09.101 [2024-07-25 14:55:29.232309] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:28:09.101 [2024-07-25 14:55:29.232356] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.101 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.101 [2024-07-25 14:55:29.289434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:09.101 [2024-07-25 14:55:29.370215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.101 [2024-07-25 14:55:29.370252] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:09.101 [2024-07-25 14:55:29.370259] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.101 [2024-07-25 14:55:29.370265] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.101 [2024-07-25 14:55:29.370270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:09.101 [2024-07-25 14:55:29.370327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.101 [2024-07-25 14:55:29.370423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.101 [2024-07-25 14:55:29.370440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:09.101 [2024-07-25 14:55:29.370441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.037 14:55:30 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:10.037 14:55:30 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:28:10.037 14:55:30 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:10.037 14:55:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.037 14:55:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:10.037 INFO: Log level set to 20 00:28:10.037 INFO: Requests: 00:28:10.037 { 00:28:10.037 "jsonrpc": "2.0", 00:28:10.037 "method": "nvmf_set_config", 00:28:10.037 "id": 1, 00:28:10.037 "params": { 00:28:10.037 "admin_cmd_passthru": { 00:28:10.037 "identify_ctrlr": true 00:28:10.037 } 00:28:10.037 } 00:28:10.037 } 00:28:10.037 00:28:10.037 INFO: response: 00:28:10.037 { 00:28:10.037 "jsonrpc": "2.0", 00:28:10.037 "id": 1, 00:28:10.037 "result": true 00:28:10.037 } 00:28:10.037 00:28:10.037 14:55:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.037 14:55:30 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:10.037 14:55:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.037 14:55:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:10.037 INFO: Setting log level to 20 00:28:10.037 INFO: Setting log level to 20 00:28:10.037 INFO: Log level set to 20 00:28:10.038 INFO: Log level set to 20 00:28:10.038 INFO: Requests: 00:28:10.038 { 00:28:10.038 "jsonrpc": "2.0", 00:28:10.038 "method": "framework_start_init", 00:28:10.038 "id": 1 00:28:10.038 } 00:28:10.038 00:28:10.038 INFO: Requests: 00:28:10.038 { 00:28:10.038 "jsonrpc": "2.0", 00:28:10.038 "method": "framework_start_init", 00:28:10.038 "id": 1 00:28:10.038 } 00:28:10.038 00:28:10.038 [2024-07-25 14:55:30.140952] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:10.038 INFO: response: 00:28:10.038 { 00:28:10.038 "jsonrpc": "2.0", 00:28:10.038 "id": 1, 00:28:10.038 "result": true 00:28:10.038 } 00:28:10.038 00:28:10.038 INFO: response: 00:28:10.038 { 00:28:10.038 "jsonrpc": "2.0", 00:28:10.038 "id": 1, 00:28:10.038 "result": true 00:28:10.038 } 00:28:10.038 00:28:10.038 14:55:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.038 14:55:30 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:10.038 14:55:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.038 14:55:30 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:28:10.038 INFO: Setting log level to 40 00:28:10.038 INFO: Setting log level to 40 00:28:10.038 INFO: Setting log level to 40 00:28:10.038 [2024-07-25 14:55:30.150457] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.038 14:55:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.038 14:55:30 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:10.038 14:55:30 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:10.038 14:55:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:10.038 14:55:30 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:28:10.038 14:55:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.038 14:55:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:13.361 Nvme0n1 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:13.361 [2024-07-25 14:55:33.032025] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:13.361 [ 00:28:13.361 { 00:28:13.361 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:13.361 "subtype": "Discovery", 00:28:13.361 "listen_addresses": [], 00:28:13.361 "allow_any_host": true, 00:28:13.361 "hosts": [] 00:28:13.361 }, 00:28:13.361 { 00:28:13.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:13.361 "subtype": "NVMe", 00:28:13.361 "listen_addresses": [ 00:28:13.361 { 00:28:13.361 "trtype": "TCP", 00:28:13.361 "adrfam": "IPv4", 00:28:13.361 "traddr": "10.0.0.2", 00:28:13.361 "trsvcid": "4420" 00:28:13.361 } 00:28:13.361 ], 00:28:13.361 "allow_any_host": true, 00:28:13.361 "hosts": [], 00:28:13.361 "serial_number": 
"SPDK00000000000001", 00:28:13.361 "model_number": "SPDK bdev Controller", 00:28:13.361 "max_namespaces": 1, 00:28:13.361 "min_cntlid": 1, 00:28:13.361 "max_cntlid": 65519, 00:28:13.361 "namespaces": [ 00:28:13.361 { 00:28:13.361 "nsid": 1, 00:28:13.361 "bdev_name": "Nvme0n1", 00:28:13.361 "name": "Nvme0n1", 00:28:13.361 "nguid": "1465311CCD134407B35B23F32960B60D", 00:28:13.361 "uuid": "1465311c-cd13-4407-b35b-23f32960b60d" 00:28:13.361 } 00:28:13.361 ] 00:28:13.361 } 00:28:13.361 ] 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:13.361 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:13.361 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:13.361 14:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:13.361 14:55:33 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:13.361 14:55:33 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:13.361 14:55:33 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:13.361 14:55:33 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:13.361 14:55:33 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:13.361 14:55:33 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:13.361 rmmod nvme_tcp 00:28:13.361 rmmod nvme_fabrics 00:28:13.361 rmmod nvme_keyring 00:28:13.361 14:55:33 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:13.361 14:55:33 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:13.361 14:55:33 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:13.361 14:55:33 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2495174 ']' 00:28:13.361 14:55:33 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2495174 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2495174 ']' 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2495174 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:13.361 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2495174 00:28:13.362 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:13.362 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:13.362 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2495174' 00:28:13.362 killing process with pid 2495174 00:28:13.362 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2495174 00:28:13.362 14:55:33 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2495174 00:28:14.742 14:55:34 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:14.742 14:55:34 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:14.742 14:55:34 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:14.743 14:55:34 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:14.743 14:55:34 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:14.743 14:55:34 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.743 14:55:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:14.743 14:55:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.283 14:55:37 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:17.283 00:28:17.283 real 0m21.641s 00:28:17.283 user 0m29.875s 00:28:17.283 sys 0m4.733s 00:28:17.283 14:55:37 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:17.283 14:55:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:17.283 ************************************ 00:28:17.283 END TEST nvmf_identify_passthru 00:28:17.283 ************************************ 00:28:17.284 14:55:37 -- common/autotest_common.sh@1142 -- # return 0 00:28:17.284 14:55:37 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:17.284 14:55:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:17.284 14:55:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:17.284 14:55:37 -- common/autotest_common.sh@10 -- # set +x 00:28:17.284 ************************************ 00:28:17.284 START TEST nvmf_dif 00:28:17.284 ************************************ 00:28:17.284 14:55:37 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:17.284 * Looking for test storage... 
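For reference, the identify_passthru run that just finished boils down to a short RPC sequence. A condensed sketch, using standalone scripts/rpc.py calls in place of the suite's rpc_cmd helper and the 0000:5e:00.0 BDF seen in the trace above (the target must be started with --wait-for-rpc for the nvmf_set_config step, as it is here):

    # enable identify passthru before framework init, then bring the framework up
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # expose the local PCIe controller through an NVMe-oF subsystem
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # identify over TCP should now report the same serial/model as the PCIe identify
    build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The BTLJ72430F0E1P0FGN serial and INTEL model strings compared by the test come from the PCIe and TCP identify invocations, which must match for the passthru feature to pass.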
00:28:17.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:17.284 14:55:37 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:17.284 14:55:37 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.284 14:55:37 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.284 14:55:37 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.284 14:55:37 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.284 14:55:37 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.284 14:55:37 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.284 14:55:37 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:28:17.284 14:55:37 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:17.284 14:55:37 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:17.284 14:55:37 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:17.284 14:55:37 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:17.284 14:55:37 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:17.284 14:55:37 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.284 14:55:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:17.284 14:55:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:17.284 14:55:37 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:17.284 14:55:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:22.563 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:22.563 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:22.563 Found net devices under 0000:86:00.0: cvl_0_0 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:22.563 Found net devices under 0000:86:00.1: cvl_0_1 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.563 14:55:42 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:28:22.563 00:28:22.563 --- 10.0.0.2 ping statistics --- 00:28:22.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.563 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:28:22.563 14:55:42 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:28:22.563 00:28:22.563 --- 10.0.0.1 ping statistics --- 00:28:22.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.564 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:28:22.564 14:55:42 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.564 14:55:42 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:22.564 14:55:42 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:22.564 14:55:42 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:25.104 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:25.104 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:25.104 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:25.104 14:55:45 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.104 14:55:45 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:25.104 14:55:45 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:25.104 14:55:45 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.104 14:55:45 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:25.104 14:55:45 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:25.104 14:55:45 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:25.104 14:55:45 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:25.104 14:55:45 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:25.104 14:55:45 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:25.104 14:55:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:25.104 14:55:45 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2500645 00:28:25.104 14:55:45 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2500645 00:28:25.104 14:55:45 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:25.104 14:55:45 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2500645 ']' 00:28:25.104 14:55:45 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.104 14:55:45 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:25.104 14:55:45 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.104 14:55:45 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:25.104 14:55:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:25.104 [2024-07-25 14:55:45.254027] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:28:25.104 [2024-07-25 14:55:45.254073] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.104 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.104 [2024-07-25 14:55:45.311010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.104 [2024-07-25 14:55:45.390368] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.104 [2024-07-25 14:55:45.390403] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.104 [2024-07-25 14:55:45.390410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.104 [2024-07-25 14:55:45.390417] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.104 [2024-07-25 14:55:45.390423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
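The nvmf_tgt launched just above then provisions a DIF-capable target through the rpc_cmd calls traced below. Condensed, and expressed as standalone scripts/rpc.py calls with the sizes and NQNs used in the trace, the bring-up is roughly:

    # target runs inside the test network namespace (as launched above)
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    # TCP transport with DIF insert/strip enabled on the wire
    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # 64 MB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

fio then drives bdev_null0 over NVMe/TCP from the host side of the interface pair.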
00:28:25.104 [2024-07-25 14:55:45.390441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.043 14:55:46 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:26.043 14:55:46 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:28:26.043 14:55:46 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:26.043 14:55:46 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:26.043 14:55:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:26.043 14:55:46 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.043 14:55:46 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:26.043 14:55:46 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:26.043 14:55:46 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.043 14:55:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:26.043 [2024-07-25 14:55:46.092962] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.043 14:55:46 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.043 14:55:46 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:26.043 14:55:46 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:26.043 14:55:46 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:26.043 14:55:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:26.043 ************************************ 00:28:26.043 START TEST fio_dif_1_default 00:28:26.043 ************************************ 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:26.043 bdev_null0 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:26.043 [2024-07-25 14:55:46.161242] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.043 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.044 { 00:28:26.044 "params": { 00:28:26.044 "name": "Nvme$subsystem", 00:28:26.044 "trtype": "$TEST_TRANSPORT", 00:28:26.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.044 "adrfam": "ipv4", 00:28:26.044 "trsvcid": "$NVMF_PORT", 00:28:26.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.044 "hdgst": ${hdgst:-false}, 00:28:26.044 "ddgst": ${ddgst:-false} 00:28:26.044 }, 00:28:26.044 "method": "bdev_nvme_attach_controller" 00:28:26.044 } 00:28:26.044 EOF 00:28:26.044 )") 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:26.044 "params": { 00:28:26.044 "name": "Nvme0", 00:28:26.044 "trtype": "tcp", 00:28:26.044 "traddr": "10.0.0.2", 00:28:26.044 "adrfam": "ipv4", 00:28:26.044 "trsvcid": "4420", 00:28:26.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:26.044 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:26.044 "hdgst": false, 00:28:26.044 "ddgst": false 00:28:26.044 }, 00:28:26.044 "method": "bdev_nvme_attach_controller" 00:28:26.044 }' 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:26.044 14:55:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.304 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:26.304 fio-3.35 00:28:26.304 Starting 1 thread 00:28:26.304 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.522 00:28:38.522 filename0: (groupid=0, jobs=1): err= 0: pid=2501124: Thu Jul 25 14:55:56 2024 00:28:38.522 read: IOPS=180, BW=723KiB/s (740kB/s)(7232KiB/10002msec) 00:28:38.522 slat (nsec): min=6074, max=68724, avg=6370.96, stdev=1688.80 00:28:38.522 clat (usec): min=1502, max=44666, avg=22109.56, stdev=20242.54 00:28:38.522 lat (usec): min=1508, max=44700, avg=22115.93, stdev=20242.47 00:28:38.522 clat percentiles (usec): 00:28:38.522 | 1.00th=[ 1827], 5.00th=[ 1844], 10.00th=[ 1860], 20.00th=[ 1876], 00:28:38.522 | 30.00th=[ 1876], 40.00th=[ 1893], 50.00th=[ 2245], 60.00th=[42206], 00:28:38.522 | 70.00th=[42206], 80.00th=[42206], 90.00th=[43254], 95.00th=[43254], 00:28:38.522 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:28:38.522 | 99.99th=[44827] 00:28:38.522 bw ( KiB/s): min= 704, max= 768, per=99.58%, avg=720.84, stdev=26.92, samples=19 00:28:38.522 iops : min= 176, max= 192, avg=180.21, stdev= 6.73, samples=19 
00:28:38.522 lat (msec) : 2=49.78%, 4=0.22%, 50=50.00% 00:28:38.522 cpu : usr=94.73%, sys=5.01%, ctx=18, majf=0, minf=260 00:28:38.522 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:38.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:38.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:38.522 issued rwts: total=1808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:38.522 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:38.522 00:28:38.522 Run status group 0 (all jobs): 00:28:38.522 READ: bw=723KiB/s (740kB/s), 723KiB/s-723KiB/s (740kB/s-740kB/s), io=7232KiB (7406kB), run=10002-10002msec 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.522 00:28:38.522 real 0m11.033s 00:28:38.522 user 0m16.390s 00:28:38.522 sys 0m0.781s 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:38.522 ************************************ 00:28:38.522 END TEST fio_dif_1_default 00:28:38.522 ************************************ 00:28:38.522 14:55:57 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:38.522 14:55:57 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:38.522 14:55:57 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:38.522 14:55:57 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:38.522 14:55:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:38.522 ************************************ 00:28:38.522 START TEST fio_dif_1_multi_subsystems 00:28:38.522 ************************************ 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:38.522 14:55:57 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:38.522 bdev_null0 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:38.522 [2024-07-25 14:55:57.269415] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:38.522 bdev_null1 00:28:38.522 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.523 { 00:28:38.523 "params": { 00:28:38.523 "name": "Nvme$subsystem", 00:28:38.523 "trtype": "$TEST_TRANSPORT", 00:28:38.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.523 "adrfam": "ipv4", 00:28:38.523 "trsvcid": "$NVMF_PORT", 00:28:38.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.523 "hdgst": ${hdgst:-false}, 00:28:38.523 "ddgst": ${ddgst:-false} 00:28:38.523 }, 00:28:38.523 "method": "bdev_nvme_attach_controller" 00:28:38.523 } 00:28:38.523 EOF 00:28:38.523 )") 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:38.523 
14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:38.523 { 00:28:38.523 "params": { 00:28:38.523 "name": "Nvme$subsystem", 00:28:38.523 "trtype": "$TEST_TRANSPORT", 00:28:38.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.523 "adrfam": "ipv4", 00:28:38.523 "trsvcid": "$NVMF_PORT", 00:28:38.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.523 "hdgst": ${hdgst:-false}, 00:28:38.523 "ddgst": ${ddgst:-false} 00:28:38.523 }, 00:28:38.523 "method": "bdev_nvme_attach_controller" 00:28:38.523 } 00:28:38.523 EOF 00:28:38.523 )") 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
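For reference, the create_subsystem helper traced above reduces to four SPDK RPCs per subsystem; a minimal standalone sketch, assuming a running nvmf_tgt with the tcp transport already created and the stock scripts/rpc.py (names and addresses copied from this run, everything else illustrative):

  # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # expose it over NVMe/TCP on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420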
00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:38.523 "params": { 00:28:38.523 "name": "Nvme0", 00:28:38.523 "trtype": "tcp", 00:28:38.523 "traddr": "10.0.0.2", 00:28:38.523 "adrfam": "ipv4", 00:28:38.523 "trsvcid": "4420", 00:28:38.523 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:38.523 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:38.523 "hdgst": false, 00:28:38.523 "ddgst": false 00:28:38.523 }, 00:28:38.523 "method": "bdev_nvme_attach_controller" 00:28:38.523 },{ 00:28:38.523 "params": { 00:28:38.523 "name": "Nvme1", 00:28:38.523 "trtype": "tcp", 00:28:38.523 "traddr": "10.0.0.2", 00:28:38.523 "adrfam": "ipv4", 00:28:38.523 "trsvcid": "4420", 00:28:38.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:38.523 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:38.523 "hdgst": false, 00:28:38.523 "ddgst": false 00:28:38.523 }, 00:28:38.523 "method": "bdev_nvme_attach_controller" 00:28:38.523 }' 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:38.523 14:55:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:38.523 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:38.523 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:38.523 fio-3.35 00:28:38.523 Starting 2 threads 00:28:38.523 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.575 00:28:48.575 filename0: (groupid=0, jobs=1): err= 0: pid=2503124: Thu Jul 25 14:56:08 2024 00:28:48.575 read: IOPS=94, BW=378KiB/s (388kB/s)(3792KiB/10020msec) 00:28:48.575 slat (nsec): min=6031, max=28328, avg=7900.00, stdev=2753.72 00:28:48.575 clat (usec): min=41847, max=44760, avg=42254.21, stdev=490.40 00:28:48.575 lat (usec): min=41854, max=44788, avg=42262.11, stdev=490.73 00:28:48.575 clat percentiles (usec): 00:28:48.575 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:28:48.575 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:28:48.575 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:28:48.575 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:28:48.575 | 99.99th=[44827] 
00:28:48.575 bw ( KiB/s): min= 352, max= 384, per=49.98%, avg=377.60, stdev=13.13, samples=20 00:28:48.575 iops : min= 88, max= 96, avg=94.40, stdev= 3.28, samples=20 00:28:48.575 lat (msec) : 50=100.00% 00:28:48.575 cpu : usr=97.70%, sys=2.05%, ctx=12, majf=0, minf=155 00:28:48.575 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:48.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.575 issued rwts: total=948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.575 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:48.575 filename1: (groupid=0, jobs=1): err= 0: pid=2503125: Thu Jul 25 14:56:08 2024 00:28:48.575 read: IOPS=94, BW=376KiB/s (385kB/s)(3776KiB/10033msec) 00:28:48.575 slat (nsec): min=6041, max=26140, avg=7878.65, stdev=2637.87 00:28:48.575 clat (usec): min=41823, max=44058, avg=42488.76, stdev=543.92 00:28:48.575 lat (usec): min=41829, max=44071, avg=42496.64, stdev=544.08 00:28:48.575 clat percentiles (usec): 00:28:48.575 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:28:48.575 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42730], 00:28:48.575 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:28:48.575 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:28:48.575 | 99.99th=[44303] 00:28:48.575 bw ( KiB/s): min= 352, max= 384, per=49.85%, avg=376.00, stdev=14.22, samples=20 00:28:48.575 iops : min= 88, max= 96, avg=94.00, stdev= 3.55, samples=20 00:28:48.575 lat (msec) : 50=100.00% 00:28:48.575 cpu : usr=97.70%, sys=2.05%, ctx=9, majf=0, minf=39 00:28:48.575 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:48.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.575 issued rwts: total=944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.575 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:48.575 00:28:48.575 Run status group 0 (all jobs): 00:28:48.575 READ: bw=754KiB/s (772kB/s), 376KiB/s-378KiB/s (385kB/s-388kB/s), io=7568KiB (7750kB), run=10020-10033msec 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.575 00:28:48.575 real 0m11.396s 00:28:48.575 user 0m26.724s 00:28:48.575 sys 0m0.699s 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:48.575 14:56:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:48.575 ************************************ 00:28:48.575 END TEST fio_dif_1_multi_subsystems 00:28:48.575 ************************************ 00:28:48.575 14:56:08 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:48.575 14:56:08 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:48.575 14:56:08 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:48.575 14:56:08 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:48.575 14:56:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:48.575 ************************************ 00:28:48.575 START TEST fio_dif_rand_params 00:28:48.576 ************************************ 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:48.576 14:56:08 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:48.576 bdev_null0 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:48.576 [2024-07-25 14:56:08.730071] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:48.576 { 00:28:48.576 "params": { 00:28:48.576 "name": "Nvme$subsystem", 00:28:48.576 "trtype": "$TEST_TRANSPORT", 00:28:48.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.576 "adrfam": "ipv4", 00:28:48.576 
"trsvcid": "$NVMF_PORT", 00:28:48.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.576 "hdgst": ${hdgst:-false}, 00:28:48.576 "ddgst": ${ddgst:-false} 00:28:48.576 }, 00:28:48.576 "method": "bdev_nvme_attach_controller" 00:28:48.576 } 00:28:48.576 EOF 00:28:48.576 )") 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:48.576 "params": { 00:28:48.576 "name": "Nvme0", 00:28:48.576 "trtype": "tcp", 00:28:48.576 "traddr": "10.0.0.2", 00:28:48.576 "adrfam": "ipv4", 00:28:48.576 "trsvcid": "4420", 00:28:48.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:48.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:48.576 "hdgst": false, 00:28:48.576 "ddgst": false 00:28:48.576 }, 00:28:48.576 "method": "bdev_nvme_attach_controller" 00:28:48.576 }' 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:48.576 14:56:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:48.836 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:48.836 ... 
00:28:48.836 fio-3.35 00:28:48.836 Starting 3 threads 00:28:48.836 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.410 00:28:55.410 filename0: (groupid=0, jobs=1): err= 0: pid=2504961: Thu Jul 25 14:56:14 2024 00:28:55.410 read: IOPS=253, BW=31.7MiB/s (33.2MB/s)(159MiB/5004msec) 00:28:55.410 slat (nsec): min=3192, max=24764, avg=9368.94, stdev=2783.26 00:28:55.410 clat (usec): min=5656, max=59263, avg=11823.64, stdev=11202.92 00:28:55.410 lat (usec): min=5663, max=59276, avg=11833.01, stdev=11203.05 00:28:55.410 clat percentiles (usec): 00:28:55.410 | 1.00th=[ 5800], 5.00th=[ 6652], 10.00th=[ 6915], 20.00th=[ 7439], 00:28:55.410 | 30.00th=[ 7767], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:28:55.410 | 70.00th=[ 9503], 80.00th=[10552], 90.00th=[13566], 95.00th=[49546], 00:28:55.410 | 99.00th=[54789], 99.50th=[55837], 99.90th=[58459], 99.95th=[59507], 00:28:55.410 | 99.99th=[59507] 00:28:55.410 bw ( KiB/s): min=25344, max=41472, per=36.56%, avg=32384.00, stdev=5633.62, samples=10 00:28:55.410 iops : min= 198, max= 324, avg=253.00, stdev=44.01, samples=10 00:28:55.410 lat (msec) : 10=74.84%, 20=17.82%, 50=2.68%, 100=4.65% 00:28:55.410 cpu : usr=94.86%, sys=4.48%, ctx=9, majf=0, minf=122 00:28:55.410 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.410 issued rwts: total=1268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.410 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:55.410 filename0: (groupid=0, jobs=1): err= 0: pid=2504962: Thu Jul 25 14:56:14 2024 00:28:55.410 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(136MiB/5003msec) 00:28:55.410 slat (nsec): min=4295, max=17655, avg=9871.79, stdev=2543.52 00:28:55.410 clat (msec): min=5, max=101, avg=13.78, stdev=13.43 00:28:55.410 lat (msec): min=5, max=101, avg=13.79, stdev=13.43 00:28:55.410 clat percentiles (msec): 00:28:55.410 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:28:55.410 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 10], 00:28:55.410 | 70.00th=[ 11], 80.00th=[ 13], 90.00th=[ 21], 95.00th=[ 53], 00:28:55.410 | 99.00th=[ 59], 99.50th=[ 60], 99.90th=[ 63], 99.95th=[ 102], 00:28:55.410 | 99.99th=[ 102] 00:28:55.410 bw ( KiB/s): min=19712, max=37632, per=31.36%, avg=27776.00, stdev=5930.82, samples=10 00:28:55.410 iops : min= 154, max= 294, avg=217.00, stdev=46.33, samples=10 00:28:55.410 lat (msec) : 10=63.88%, 20=26.10%, 50=2.02%, 100=7.90%, 250=0.09% 00:28:55.410 cpu : usr=95.72%, sys=3.76%, ctx=8, majf=0, minf=36 00:28:55.410 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.410 issued rwts: total=1088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.410 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:55.410 filename0: (groupid=0, jobs=1): err= 0: pid=2504963: Thu Jul 25 14:56:14 2024 00:28:55.410 read: IOPS=222, BW=27.8MiB/s (29.2MB/s)(140MiB/5017msec) 00:28:55.410 slat (nsec): min=6252, max=25056, avg=9737.28, stdev=2681.92 00:28:55.410 clat (usec): min=5603, max=95618, avg=13466.73, stdev=13284.68 00:28:55.410 lat (usec): min=5611, max=95630, avg=13476.47, stdev=13284.86 00:28:55.410 clat percentiles (usec): 00:28:55.410 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 
6980], 20.00th=[ 7635], 00:28:55.410 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9634], 00:28:55.410 | 70.00th=[10290], 80.00th=[11994], 90.00th=[17695], 95.00th=[52167], 00:28:55.410 | 99.00th=[58459], 99.50th=[61080], 99.90th=[64750], 99.95th=[95945], 00:28:55.410 | 99.99th=[95945] 00:28:55.410 bw ( KiB/s): min=15360, max=43008, per=32.16%, avg=28492.80, stdev=8356.61, samples=10 00:28:55.410 iops : min= 120, max= 336, avg=222.60, stdev=65.29, samples=10 00:28:55.410 lat (msec) : 10=66.76%, 20=23.66%, 50=2.24%, 100=7.35% 00:28:55.410 cpu : usr=95.04%, sys=4.49%, ctx=8, majf=0, minf=121 00:28:55.410 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.410 issued rwts: total=1116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.410 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:55.410 00:28:55.410 Run status group 0 (all jobs): 00:28:55.410 READ: bw=86.5MiB/s (90.7MB/s), 27.2MiB/s-31.7MiB/s (28.5MB/s-33.2MB/s), io=434MiB (455MB), run=5003-5017msec 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.410 bdev_null0 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.410 [2024-07-25 14:56:14.792347] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.410 bdev_null1 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.410 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.410 14:56:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.411 bdev_null2 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
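Stripped of the /dev/fd plumbing, the launch this trace is building up to is simply fio with the SPDK bdev plugin preloaded and a JSON bdev configuration; a rough standalone sketch using the paths from this run and illustrative file names:

  # bdev.json carries the bdev_nvme_attach_controller entries printed further down,
  # wrapped (by assumption) in the usual {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./dif.fio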
00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.411 { 00:28:55.411 "params": { 00:28:55.411 "name": "Nvme$subsystem", 00:28:55.411 "trtype": "$TEST_TRANSPORT", 00:28:55.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.411 "adrfam": "ipv4", 00:28:55.411 "trsvcid": "$NVMF_PORT", 00:28:55.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.411 "hdgst": ${hdgst:-false}, 00:28:55.411 "ddgst": ${ddgst:-false} 00:28:55.411 }, 00:28:55.411 "method": "bdev_nvme_attach_controller" 00:28:55.411 } 00:28:55.411 EOF 00:28:55.411 )") 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.411 { 00:28:55.411 "params": { 00:28:55.411 "name": "Nvme$subsystem", 00:28:55.411 "trtype": "$TEST_TRANSPORT", 00:28:55.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.411 "adrfam": "ipv4", 00:28:55.411 "trsvcid": "$NVMF_PORT", 00:28:55.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.411 "hdgst": ${hdgst:-false}, 00:28:55.411 "ddgst": ${ddgst:-false} 00:28:55.411 }, 00:28:55.411 "method": "bdev_nvme_attach_controller" 00:28:55.411 } 00:28:55.411 EOF 00:28:55.411 )") 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:55.411 14:56:14 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.411 { 00:28:55.411 "params": { 00:28:55.411 "name": "Nvme$subsystem", 00:28:55.411 "trtype": "$TEST_TRANSPORT", 00:28:55.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.411 "adrfam": "ipv4", 00:28:55.411 "trsvcid": "$NVMF_PORT", 00:28:55.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.411 "hdgst": ${hdgst:-false}, 00:28:55.411 "ddgst": ${ddgst:-false} 00:28:55.411 }, 00:28:55.411 "method": "bdev_nvme_attach_controller" 00:28:55.411 } 00:28:55.411 EOF 00:28:55.411 )") 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:55.411 14:56:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:55.411 "params": { 00:28:55.411 "name": "Nvme0", 00:28:55.411 "trtype": "tcp", 00:28:55.411 "traddr": "10.0.0.2", 00:28:55.411 "adrfam": "ipv4", 00:28:55.411 "trsvcid": "4420", 00:28:55.411 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:55.411 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:55.411 "hdgst": false, 00:28:55.411 "ddgst": false 00:28:55.411 }, 00:28:55.411 "method": "bdev_nvme_attach_controller" 00:28:55.411 },{ 00:28:55.411 "params": { 00:28:55.411 "name": "Nvme1", 00:28:55.411 "trtype": "tcp", 00:28:55.411 "traddr": "10.0.0.2", 00:28:55.411 "adrfam": "ipv4", 00:28:55.411 "trsvcid": "4420", 00:28:55.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.411 "hdgst": false, 00:28:55.411 "ddgst": false 00:28:55.411 }, 00:28:55.411 "method": "bdev_nvme_attach_controller" 00:28:55.411 },{ 00:28:55.411 "params": { 00:28:55.411 "name": "Nvme2", 00:28:55.411 "trtype": "tcp", 00:28:55.411 "traddr": "10.0.0.2", 00:28:55.411 "adrfam": "ipv4", 00:28:55.411 "trsvcid": "4420", 00:28:55.411 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:55.411 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:55.411 "hdgst": false, 00:28:55.411 "ddgst": false 00:28:55.412 }, 00:28:55.412 "method": "bdev_nvme_attach_controller" 00:28:55.412 }' 00:28:55.412 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:55.412 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:55.412 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:55.412 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:55.412 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:55.412 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:55.412 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 
00:28:55.412 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:55.412 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:55.412 14:56:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:55.412 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:55.412 ... 00:28:55.412 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:55.412 ... 00:28:55.412 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:55.412 ... 00:28:55.412 fio-3.35 00:28:55.412 Starting 24 threads 00:28:55.412 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.641 00:29:07.641 filename0: (groupid=0, jobs=1): err= 0: pid=2506224: Thu Jul 25 14:56:26 2024 00:29:07.641 read: IOPS=548, BW=2194KiB/s (2247kB/s)(21.5MiB/10028msec) 00:29:07.641 slat (nsec): min=6741, max=71826, avg=15320.59, stdev=7966.49 00:29:07.641 clat (usec): min=13955, max=58701, avg=29049.69, stdev=5935.36 00:29:07.641 lat (usec): min=13965, max=58716, avg=29065.01, stdev=5935.58 00:29:07.641 clat percentiles (usec): 00:29:07.641 | 1.00th=[16057], 5.00th=[21627], 10.00th=[23987], 20.00th=[24773], 00:29:07.641 | 30.00th=[25297], 40.00th=[26084], 50.00th=[26608], 60.00th=[30540], 00:29:07.641 | 70.00th=[32375], 80.00th=[33817], 90.00th=[35914], 95.00th=[39060], 00:29:07.641 | 99.00th=[49021], 99.50th=[49546], 99.90th=[52167], 99.95th=[53740], 00:29:07.641 | 99.99th=[58459] 00:29:07.641 bw ( KiB/s): min= 1968, max= 2384, per=4.02%, avg=2193.80, stdev=107.79, samples=20 00:29:07.641 iops : min= 492, max= 596, avg=548.45, stdev=26.95, samples=20 00:29:07.641 lat (msec) : 20=4.09%, 50=95.67%, 100=0.24% 00:29:07.641 cpu : usr=98.54%, sys=1.04%, ctx=17, majf=0, minf=53 00:29:07.641 IO depths : 1=0.6%, 2=1.5%, 4=8.7%, 8=76.3%, 16=12.9%, 32=0.0%, >=64=0.0% 00:29:07.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.641 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.641 issued rwts: total=5500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.641 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.641 filename0: (groupid=0, jobs=1): err= 0: pid=2506225: Thu Jul 25 14:56:26 2024 00:29:07.641 read: IOPS=553, BW=2213KiB/s (2266kB/s)(21.6MiB/10008msec) 00:29:07.641 slat (nsec): min=6242, max=84609, avg=19131.46, stdev=12141.58 00:29:07.641 clat (usec): min=7002, max=53803, avg=28813.87, stdev=5771.11 00:29:07.641 lat (usec): min=7014, max=53829, avg=28833.00, stdev=5770.83 00:29:07.641 clat percentiles (usec): 00:29:07.641 | 1.00th=[14353], 5.00th=[22152], 10.00th=[23987], 20.00th=[25035], 00:29:07.641 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[30278], 00:29:07.641 | 70.00th=[32375], 80.00th=[33817], 90.00th=[35390], 95.00th=[38536], 00:29:07.641 | 99.00th=[44303], 99.50th=[49021], 99.90th=[51643], 99.95th=[53740], 00:29:07.641 | 99.99th=[53740] 00:29:07.641 bw ( KiB/s): min= 1920, max= 2352, per=4.04%, avg=2208.50, stdev=101.18, samples=20 00:29:07.641 iops : min= 480, max= 588, avg=552.10, stdev=25.29, samples=20 00:29:07.641 lat (msec) : 10=0.25%, 20=3.50%, 50=95.90%, 100=0.34% 00:29:07.641 cpu : 
usr=98.45%, sys=1.14%, ctx=17, majf=0, minf=62 00:29:07.641 IO depths : 1=0.5%, 2=1.2%, 4=9.3%, 8=76.1%, 16=12.9%, 32=0.0%, >=64=0.0% 00:29:07.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.641 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.641 issued rwts: total=5536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.641 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.641 filename0: (groupid=0, jobs=1): err= 0: pid=2506226: Thu Jul 25 14:56:26 2024 00:29:07.641 read: IOPS=564, BW=2257KiB/s (2311kB/s)(22.1MiB/10006msec) 00:29:07.641 slat (nsec): min=6809, max=79312, avg=19876.66, stdev=12865.60 00:29:07.641 clat (usec): min=8342, max=52324, avg=28252.37, stdev=5310.72 00:29:07.641 lat (usec): min=8350, max=52363, avg=28272.25, stdev=5310.27 00:29:07.641 clat percentiles (usec): 00:29:07.641 | 1.00th=[14877], 5.00th=[23200], 10.00th=[23987], 20.00th=[24773], 00:29:07.641 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26084], 60.00th=[26870], 00:29:07.641 | 70.00th=[31589], 80.00th=[33162], 90.00th=[35390], 95.00th=[36963], 00:29:07.641 | 99.00th=[44303], 99.50th=[46924], 99.90th=[52167], 99.95th=[52167], 00:29:07.641 | 99.99th=[52167] 00:29:07.641 bw ( KiB/s): min= 2064, max= 2352, per=4.12%, avg=2252.60, stdev=84.38, samples=20 00:29:07.641 iops : min= 516, max= 588, avg=563.15, stdev=21.09, samples=20 00:29:07.641 lat (msec) : 10=0.18%, 20=2.67%, 50=96.79%, 100=0.35% 00:29:07.641 cpu : usr=98.46%, sys=1.05%, ctx=16, majf=0, minf=69 00:29:07.641 IO depths : 1=0.1%, 2=0.5%, 4=7.3%, 8=79.0%, 16=13.1%, 32=0.0%, >=64=0.0% 00:29:07.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.641 complete : 0=0.0%, 4=89.8%, 8=5.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.641 issued rwts: total=5645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.641 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.641 filename0: (groupid=0, jobs=1): err= 0: pid=2506227: Thu Jul 25 14:56:26 2024 00:29:07.641 read: IOPS=573, BW=2293KiB/s (2348kB/s)(22.4MiB/10006msec) 00:29:07.641 slat (nsec): min=6520, max=73940, avg=18882.45, stdev=11713.56 00:29:07.641 clat (usec): min=8536, max=48243, avg=27799.19, stdev=5117.21 00:29:07.641 lat (usec): min=8546, max=48266, avg=27818.08, stdev=5116.31 00:29:07.641 clat percentiles (usec): 00:29:07.641 | 1.00th=[15664], 5.00th=[20579], 10.00th=[23725], 20.00th=[24511], 00:29:07.641 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26608], 00:29:07.641 | 70.00th=[30540], 80.00th=[32900], 90.00th=[34866], 95.00th=[36439], 00:29:07.641 | 99.00th=[41157], 99.50th=[41681], 99.90th=[47973], 99.95th=[47973], 00:29:07.641 | 99.99th=[48497] 00:29:07.641 bw ( KiB/s): min= 1920, max= 2432, per=4.19%, avg=2288.10, stdev=121.27, samples=20 00:29:07.641 iops : min= 480, max= 608, avg=572.00, stdev=30.30, samples=20 00:29:07.641 lat (msec) : 10=0.24%, 20=3.85%, 50=95.90% 00:29:07.641 cpu : usr=98.54%, sys=1.04%, ctx=18, majf=0, minf=50 00:29:07.641 IO depths : 1=0.9%, 2=1.9%, 4=9.3%, 8=75.1%, 16=12.8%, 32=0.0%, >=64=0.0% 00:29:07.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.641 complete : 0=0.0%, 4=90.3%, 8=5.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.641 issued rwts: total=5737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.641 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.641 filename0: (groupid=0, jobs=1): err= 0: pid=2506228: Thu Jul 25 14:56:26 2024 00:29:07.641 read: IOPS=577, 
BW=2310KiB/s (2365kB/s)(22.6MiB/10006msec) 00:29:07.641 slat (nsec): min=6458, max=86897, avg=21536.08, stdev=14509.40 00:29:07.641 clat (usec): min=7776, max=58960, avg=27593.79, stdev=5112.36 00:29:07.641 lat (usec): min=7793, max=58976, avg=27615.33, stdev=5113.36 00:29:07.641 clat percentiles (usec): 00:29:07.641 | 1.00th=[15270], 5.00th=[22414], 10.00th=[23987], 20.00th=[24511], 00:29:07.641 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346], 00:29:07.641 | 70.00th=[27919], 80.00th=[32375], 90.00th=[34341], 95.00th=[36439], 00:29:07.641 | 99.00th=[43254], 99.50th=[46400], 99.90th=[49546], 99.95th=[58983], 00:29:07.641 | 99.99th=[58983] 00:29:07.641 bw ( KiB/s): min= 1851, max= 2432, per=4.22%, avg=2304.95, stdev=126.21, samples=20 00:29:07.641 iops : min= 462, max= 608, avg=576.20, stdev=31.70, samples=20 00:29:07.641 lat (msec) : 10=0.38%, 20=2.94%, 50=96.59%, 100=0.09% 00:29:07.642 cpu : usr=98.81%, sys=0.78%, ctx=22, majf=0, minf=47 00:29:07.642 IO depths : 1=0.3%, 2=0.6%, 4=7.4%, 8=78.6%, 16=13.1%, 32=0.0%, >=64=0.0% 00:29:07.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 complete : 0=0.0%, 4=89.7%, 8=5.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 issued rwts: total=5778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.642 filename0: (groupid=0, jobs=1): err= 0: pid=2506229: Thu Jul 25 14:56:26 2024 00:29:07.642 read: IOPS=557, BW=2232KiB/s (2285kB/s)(21.8MiB/10020msec) 00:29:07.642 slat (nsec): min=6889, max=82728, avg=16460.09, stdev=9429.47 00:29:07.642 clat (usec): min=11275, max=50485, avg=28563.76, stdev=5321.01 00:29:07.642 lat (usec): min=11288, max=50503, avg=28580.22, stdev=5321.37 00:29:07.642 clat percentiles (usec): 00:29:07.642 | 1.00th=[16581], 5.00th=[22152], 10.00th=[23987], 20.00th=[24773], 00:29:07.642 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26608], 60.00th=[28443], 00:29:07.642 | 70.00th=[31589], 80.00th=[33424], 90.00th=[35390], 95.00th=[36963], 00:29:07.642 | 99.00th=[44303], 99.50th=[47449], 99.90th=[50070], 99.95th=[50594], 00:29:07.642 | 99.99th=[50594] 00:29:07.642 bw ( KiB/s): min= 1920, max= 2400, per=4.08%, avg=2229.60, stdev=106.55, samples=20 00:29:07.642 iops : min= 480, max= 600, avg=557.40, stdev=26.64, samples=20 00:29:07.642 lat (msec) : 20=3.49%, 50=96.37%, 100=0.14% 00:29:07.642 cpu : usr=98.40%, sys=1.16%, ctx=16, majf=0, minf=67 00:29:07.642 IO depths : 1=0.9%, 2=2.0%, 4=9.5%, 8=74.9%, 16=12.7%, 32=0.0%, >=64=0.0% 00:29:07.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 complete : 0=0.0%, 4=90.4%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 issued rwts: total=5590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.642 filename0: (groupid=0, jobs=1): err= 0: pid=2506230: Thu Jul 25 14:56:26 2024 00:29:07.642 read: IOPS=615, BW=2461KiB/s (2520kB/s)(24.1MiB/10010msec) 00:29:07.642 slat (nsec): min=6890, max=86156, avg=14805.72, stdev=7551.28 00:29:07.642 clat (usec): min=11488, max=50691, avg=25908.66, stdev=3921.64 00:29:07.642 lat (usec): min=11497, max=50704, avg=25923.46, stdev=3922.26 00:29:07.642 clat percentiles (usec): 00:29:07.642 | 1.00th=[15401], 5.00th=[19006], 10.00th=[23462], 20.00th=[24511], 00:29:07.642 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:29:07.642 | 70.00th=[26084], 80.00th=[26608], 90.00th=[31589], 95.00th=[33424], 00:29:07.642 | 
99.00th=[37487], 99.50th=[41157], 99.90th=[46400], 99.95th=[47449], 00:29:07.642 | 99.99th=[50594] 00:29:07.642 bw ( KiB/s): min= 2256, max= 2768, per=4.50%, avg=2456.80, stdev=113.31, samples=20 00:29:07.642 iops : min= 564, max= 692, avg=614.20, stdev=28.33, samples=20 00:29:07.642 lat (msec) : 20=5.83%, 50=94.14%, 100=0.03% 00:29:07.642 cpu : usr=98.53%, sys=1.01%, ctx=15, majf=0, minf=66 00:29:07.642 IO depths : 1=1.9%, 2=4.1%, 4=13.1%, 8=69.2%, 16=11.7%, 32=0.0%, >=64=0.0% 00:29:07.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 complete : 0=0.0%, 4=91.5%, 8=3.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 issued rwts: total=6158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.642 filename0: (groupid=0, jobs=1): err= 0: pid=2506231: Thu Jul 25 14:56:26 2024 00:29:07.642 read: IOPS=574, BW=2298KiB/s (2353kB/s)(22.5MiB/10020msec) 00:29:07.642 slat (nsec): min=6758, max=73500, avg=16947.35, stdev=10428.90 00:29:07.642 clat (usec): min=11725, max=45140, avg=27734.44, stdev=4828.36 00:29:07.642 lat (usec): min=11737, max=45148, avg=27751.39, stdev=4828.69 00:29:07.642 clat percentiles (usec): 00:29:07.642 | 1.00th=[15926], 5.00th=[21890], 10.00th=[23725], 20.00th=[24773], 00:29:07.642 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26608], 00:29:07.642 | 70.00th=[30278], 80.00th=[32900], 90.00th=[34866], 95.00th=[35914], 00:29:07.642 | 99.00th=[41157], 99.50th=[42730], 99.90th=[44827], 99.95th=[45351], 00:29:07.642 | 99.99th=[45351] 00:29:07.642 bw ( KiB/s): min= 2152, max= 2488, per=4.21%, avg=2296.00, stdev=89.20, samples=20 00:29:07.642 iops : min= 538, max= 622, avg=574.00, stdev=22.30, samples=20 00:29:07.642 lat (msec) : 20=3.93%, 50=96.07% 00:29:07.642 cpu : usr=98.54%, sys=1.03%, ctx=15, majf=0, minf=69 00:29:07.642 IO depths : 1=0.8%, 2=1.7%, 4=9.2%, 8=75.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:29:07.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 issued rwts: total=5756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.642 filename1: (groupid=0, jobs=1): err= 0: pid=2506232: Thu Jul 25 14:56:26 2024 00:29:07.642 read: IOPS=552, BW=2209KiB/s (2262kB/s)(21.6MiB/10005msec) 00:29:07.642 slat (nsec): min=6851, max=75994, avg=16408.77, stdev=11027.92 00:29:07.642 clat (usec): min=7817, max=59722, avg=28874.54, stdev=5843.84 00:29:07.642 lat (usec): min=7825, max=59742, avg=28890.95, stdev=5843.00 00:29:07.642 clat percentiles (usec): 00:29:07.642 | 1.00th=[15270], 5.00th=[20317], 10.00th=[23987], 20.00th=[24773], 00:29:07.642 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26870], 60.00th=[30278], 00:29:07.642 | 70.00th=[32375], 80.00th=[33817], 90.00th=[35390], 95.00th=[38011], 00:29:07.642 | 99.00th=[47973], 99.50th=[49546], 99.90th=[50594], 99.95th=[52167], 00:29:07.642 | 99.99th=[59507] 00:29:07.642 bw ( KiB/s): min= 1888, max= 2416, per=4.04%, avg=2208.40, stdev=131.90, samples=20 00:29:07.642 iops : min= 472, max= 604, avg=552.10, stdev=32.98, samples=20 00:29:07.642 lat (msec) : 10=0.11%, 20=4.65%, 50=94.93%, 100=0.31% 00:29:07.642 cpu : usr=98.76%, sys=0.83%, ctx=16, majf=0, minf=50 00:29:07.642 IO depths : 1=0.5%, 2=1.2%, 4=9.2%, 8=76.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:29:07.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 
complete : 0=0.0%, 4=90.4%, 8=4.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 issued rwts: total=5525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.642 filename1: (groupid=0, jobs=1): err= 0: pid=2506233: Thu Jul 25 14:56:26 2024 00:29:07.642 read: IOPS=568, BW=2275KiB/s (2329kB/s)(22.2MiB/10016msec) 00:29:07.642 slat (nsec): min=6817, max=74834, avg=16627.36, stdev=11519.57 00:29:07.642 clat (usec): min=11441, max=50874, avg=28035.31, stdev=5043.36 00:29:07.642 lat (usec): min=11449, max=50882, avg=28051.94, stdev=5042.15 00:29:07.642 clat percentiles (usec): 00:29:07.642 | 1.00th=[16581], 5.00th=[22152], 10.00th=[23725], 20.00th=[24773], 00:29:07.642 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[27395], 00:29:07.642 | 70.00th=[30278], 80.00th=[32375], 90.00th=[34341], 95.00th=[36439], 00:29:07.642 | 99.00th=[43779], 99.50th=[48497], 99.90th=[50594], 99.95th=[51119], 00:29:07.642 | 99.99th=[51119] 00:29:07.642 bw ( KiB/s): min= 2048, max= 2480, per=4.16%, avg=2272.00, stdev=128.05, samples=20 00:29:07.642 iops : min= 512, max= 620, avg=568.00, stdev=32.01, samples=20 00:29:07.642 lat (msec) : 20=3.60%, 50=96.26%, 100=0.14% 00:29:07.642 cpu : usr=98.43%, sys=1.06%, ctx=15, majf=0, minf=61 00:29:07.642 IO depths : 1=0.5%, 2=1.2%, 4=7.6%, 8=77.6%, 16=13.2%, 32=0.0%, >=64=0.0% 00:29:07.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 complete : 0=0.0%, 4=90.1%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.642 filename1: (groupid=0, jobs=1): err= 0: pid=2506234: Thu Jul 25 14:56:26 2024 00:29:07.642 read: IOPS=504, BW=2016KiB/s (2065kB/s)(19.7MiB/10003msec) 00:29:07.642 slat (nsec): min=6126, max=68956, avg=13418.26, stdev=8804.08 00:29:07.642 clat (usec): min=10830, max=59167, avg=31670.02, stdev=6605.64 00:29:07.642 lat (usec): min=10846, max=59181, avg=31683.44, stdev=6606.20 00:29:07.642 clat percentiles (usec): 00:29:07.642 | 1.00th=[17171], 5.00th=[22676], 10.00th=[24511], 20.00th=[25822], 00:29:07.642 | 30.00th=[27657], 40.00th=[30540], 50.00th=[31851], 60.00th=[33162], 00:29:07.642 | 70.00th=[34341], 80.00th=[35390], 90.00th=[38011], 95.00th=[44827], 00:29:07.642 | 99.00th=[51643], 99.50th=[52691], 99.90th=[56886], 99.95th=[58983], 00:29:07.642 | 99.99th=[58983] 00:29:07.642 bw ( KiB/s): min= 1776, max= 2280, per=3.66%, avg=2001.68, stdev=173.98, samples=19 00:29:07.642 iops : min= 444, max= 570, avg=500.42, stdev=43.49, samples=19 00:29:07.642 lat (msec) : 20=2.30%, 50=94.92%, 100=2.78% 00:29:07.642 cpu : usr=98.65%, sys=0.91%, ctx=15, majf=0, minf=75 00:29:07.642 IO depths : 1=0.3%, 2=0.7%, 4=6.3%, 8=78.5%, 16=14.1%, 32=0.0%, >=64=0.0% 00:29:07.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 complete : 0=0.0%, 4=89.8%, 8=6.1%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 issued rwts: total=5042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.642 filename1: (groupid=0, jobs=1): err= 0: pid=2506235: Thu Jul 25 14:56:26 2024 00:29:07.642 read: IOPS=574, BW=2298KiB/s (2354kB/s)(22.5MiB/10019msec) 00:29:07.642 slat (nsec): min=3220, max=80311, avg=15237.97, stdev=8444.66 00:29:07.642 clat (usec): min=10729, max=51048, avg=27756.96, stdev=5090.25 00:29:07.642 lat (usec): min=10736, max=51066, 
avg=27772.19, stdev=5090.46 00:29:07.642 clat percentiles (usec): 00:29:07.642 | 1.00th=[15926], 5.00th=[22152], 10.00th=[23725], 20.00th=[24773], 00:29:07.642 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26608], 00:29:07.642 | 70.00th=[29492], 80.00th=[32375], 90.00th=[34866], 95.00th=[36963], 00:29:07.642 | 99.00th=[42206], 99.50th=[45351], 99.90th=[50594], 99.95th=[51119], 00:29:07.642 | 99.99th=[51119] 00:29:07.642 bw ( KiB/s): min= 2176, max= 2432, per=4.21%, avg=2296.40, stdev=83.48, samples=20 00:29:07.642 iops : min= 544, max= 608, avg=574.10, stdev=20.87, samples=20 00:29:07.642 lat (msec) : 20=4.13%, 50=95.73%, 100=0.14% 00:29:07.642 cpu : usr=98.37%, sys=1.22%, ctx=23, majf=0, minf=55 00:29:07.642 IO depths : 1=0.4%, 2=0.9%, 4=7.3%, 8=78.1%, 16=13.3%, 32=0.0%, >=64=0.0% 00:29:07.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.642 issued rwts: total=5757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.642 filename1: (groupid=0, jobs=1): err= 0: pid=2506236: Thu Jul 25 14:56:26 2024 00:29:07.642 read: IOPS=599, BW=2400KiB/s (2457kB/s)(23.5MiB/10016msec) 00:29:07.643 slat (nsec): min=4204, max=82140, avg=13579.28, stdev=7012.15 00:29:07.643 clat (usec): min=11410, max=50627, avg=26591.93, stdev=4780.16 00:29:07.643 lat (usec): min=11425, max=50639, avg=26605.51, stdev=4780.75 00:29:07.643 clat percentiles (usec): 00:29:07.643 | 1.00th=[14615], 5.00th=[19792], 10.00th=[23200], 20.00th=[24249], 00:29:07.643 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:29:07.643 | 70.00th=[26608], 80.00th=[29230], 90.00th=[33424], 95.00th=[35390], 00:29:07.643 | 99.00th=[42206], 99.50th=[43779], 99.90th=[50070], 99.95th=[50070], 00:29:07.643 | 99.99th=[50594] 00:29:07.643 bw ( KiB/s): min= 2160, max= 2536, per=4.39%, avg=2397.40, stdev=87.62, samples=20 00:29:07.643 iops : min= 540, max= 634, avg=599.35, stdev=21.90, samples=20 00:29:07.643 lat (msec) : 20=5.46%, 50=94.43%, 100=0.12% 00:29:07.643 cpu : usr=98.55%, sys=1.04%, ctx=18, majf=0, minf=79 00:29:07.643 IO depths : 1=0.4%, 2=0.9%, 4=6.6%, 8=77.6%, 16=14.5%, 32=0.0%, >=64=0.0% 00:29:07.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.643 complete : 0=0.0%, 4=90.3%, 8=5.8%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.643 issued rwts: total=6009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.643 filename1: (groupid=0, jobs=1): err= 0: pid=2506237: Thu Jul 25 14:56:26 2024 00:29:07.643 read: IOPS=628, BW=2515KiB/s (2576kB/s)(24.6MiB/10007msec) 00:29:07.643 slat (nsec): min=6798, max=68306, avg=14569.46, stdev=6271.20 00:29:07.643 clat (usec): min=11297, max=43759, avg=25318.99, stdev=1960.02 00:29:07.643 lat (usec): min=11311, max=43795, avg=25333.55, stdev=1959.95 00:29:07.643 clat percentiles (usec): 00:29:07.643 | 1.00th=[17171], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:29:07.643 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:29:07.643 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27132], 00:29:07.643 | 99.00th=[32375], 99.50th=[34341], 99.90th=[36963], 99.95th=[37487], 00:29:07.643 | 99.99th=[43779] 00:29:07.643 bw ( KiB/s): min= 2360, max= 2560, per=4.60%, avg=2510.80, stdev=68.88, samples=20 00:29:07.643 iops : min= 590, max= 640, avg=627.70, 
stdev=17.22, samples=20 00:29:07.643 lat (msec) : 20=1.51%, 50=98.49% 00:29:07.643 cpu : usr=98.80%, sys=0.79%, ctx=18, majf=0, minf=71 00:29:07.643 IO depths : 1=5.7%, 2=11.5%, 4=23.6%, 8=52.4%, 16=6.8%, 32=0.0%, >=64=0.0% 00:29:07.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.643 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.643 issued rwts: total=6293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.643 filename1: (groupid=0, jobs=1): err= 0: pid=2506238: Thu Jul 25 14:56:26 2024 00:29:07.643 read: IOPS=584, BW=2339KiB/s (2396kB/s)(22.9MiB/10011msec) 00:29:07.643 slat (nsec): min=6837, max=79554, avg=15149.16, stdev=7939.40 00:29:07.643 clat (usec): min=9976, max=51763, avg=27264.81, stdev=4867.61 00:29:07.643 lat (usec): min=9989, max=51778, avg=27279.96, stdev=4868.34 00:29:07.643 clat percentiles (usec): 00:29:07.643 | 1.00th=[15533], 5.00th=[20317], 10.00th=[23725], 20.00th=[24511], 00:29:07.643 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346], 00:29:07.643 | 70.00th=[27395], 80.00th=[31851], 90.00th=[34341], 95.00th=[35914], 00:29:07.643 | 99.00th=[41157], 99.50th=[42730], 99.90th=[49546], 99.95th=[51643], 00:29:07.643 | 99.99th=[51643] 00:29:07.643 bw ( KiB/s): min= 2144, max= 2424, per=4.28%, avg=2335.60, stdev=76.53, samples=20 00:29:07.643 iops : min= 536, max= 606, avg=583.90, stdev=19.13, samples=20 00:29:07.643 lat (msec) : 10=0.02%, 20=4.59%, 50=95.32%, 100=0.07% 00:29:07.643 cpu : usr=98.46%, sys=1.12%, ctx=18, majf=0, minf=53 00:29:07.643 IO depths : 1=0.8%, 2=1.7%, 4=8.8%, 8=76.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:29:07.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.643 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.643 issued rwts: total=5855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.643 filename1: (groupid=0, jobs=1): err= 0: pid=2506239: Thu Jul 25 14:56:26 2024 00:29:07.643 read: IOPS=609, BW=2437KiB/s (2495kB/s)(23.8MiB/10011msec) 00:29:07.643 slat (nsec): min=6865, max=84588, avg=17188.52, stdev=10466.87 00:29:07.643 clat (usec): min=11324, max=48983, avg=26157.60, stdev=4354.20 00:29:07.643 lat (usec): min=11341, max=48991, avg=26174.79, stdev=4353.63 00:29:07.643 clat percentiles (usec): 00:29:07.643 | 1.00th=[14615], 5.00th=[19006], 10.00th=[23200], 20.00th=[24249], 00:29:07.643 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:29:07.643 | 70.00th=[26346], 80.00th=[27132], 90.00th=[32375], 95.00th=[34866], 00:29:07.643 | 99.00th=[40109], 99.50th=[41681], 99.90th=[45351], 99.95th=[47449], 00:29:07.643 | 99.99th=[49021] 00:29:07.643 bw ( KiB/s): min= 2080, max= 2640, per=4.45%, avg=2432.80, stdev=130.86, samples=20 00:29:07.643 iops : min= 520, max= 660, avg=608.20, stdev=32.72, samples=20 00:29:07.643 lat (msec) : 20=5.64%, 50=94.36% 00:29:07.643 cpu : usr=98.48%, sys=1.00%, ctx=16, majf=0, minf=70 00:29:07.643 IO depths : 1=0.7%, 2=1.7%, 4=12.3%, 8=71.9%, 16=13.3%, 32=0.0%, >=64=0.0% 00:29:07.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.643 complete : 0=0.0%, 4=92.3%, 8=2.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.643 issued rwts: total=6098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.643 filename2: (groupid=0, 
jobs=1): err= 0: pid=2506240: Thu Jul 25 14:56:26 2024 00:29:07.643 read: IOPS=534, BW=2136KiB/s (2187kB/s)(20.9MiB/10014msec) 00:29:07.643 slat (usec): min=6, max=882, avg=26.37, stdev=21.17 00:29:07.643 clat (usec): min=12207, max=55873, avg=29818.57, stdev=6052.95 00:29:07.643 lat (usec): min=12236, max=55897, avg=29844.94, stdev=6053.30 00:29:07.643 clat percentiles (usec): 00:29:07.643 | 1.00th=[15270], 5.00th=[21627], 10.00th=[23987], 20.00th=[25035], 00:29:07.643 | 30.00th=[25822], 40.00th=[26346], 50.00th=[29754], 60.00th=[31851], 00:29:07.643 | 70.00th=[33162], 80.00th=[34341], 90.00th=[36439], 95.00th=[39060], 00:29:07.643 | 99.00th=[49546], 99.50th=[51119], 99.90th=[52167], 99.95th=[55837], 00:29:07.643 | 99.99th=[55837] 00:29:07.643 bw ( KiB/s): min= 1968, max= 2304, per=3.89%, avg=2125.47, stdev=102.59, samples=19 00:29:07.643 iops : min= 492, max= 576, avg=531.37, stdev=25.65, samples=19 00:29:07.643 lat (msec) : 20=3.52%, 50=95.66%, 100=0.82% 00:29:07.643 cpu : usr=91.57%, sys=3.40%, ctx=149, majf=0, minf=41 00:29:07.643 IO depths : 1=0.1%, 2=0.4%, 4=6.5%, 8=78.9%, 16=14.1%, 32=0.0%, >=64=0.0% 00:29:07.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.643 complete : 0=0.0%, 4=89.9%, 8=5.9%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.643 issued rwts: total=5348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.643 filename2: (groupid=0, jobs=1): err= 0: pid=2506241: Thu Jul 25 14:56:26 2024 00:29:07.643 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.3MiB/10022msec) 00:29:07.643 slat (nsec): min=6865, max=75129, avg=19145.95, stdev=11671.10 00:29:07.643 clat (usec): min=10297, max=51637, avg=27997.07, stdev=5324.84 00:29:07.643 lat (usec): min=10310, max=51651, avg=28016.22, stdev=5324.31 00:29:07.643 clat percentiles (usec): 00:29:07.643 | 1.00th=[15795], 5.00th=[21365], 10.00th=[23725], 20.00th=[24773], 00:29:07.643 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26084], 60.00th=[26870], 00:29:07.643 | 70.00th=[30802], 80.00th=[32900], 90.00th=[34866], 95.00th=[36963], 00:29:07.643 | 99.00th=[44827], 99.50th=[48497], 99.90th=[51119], 99.95th=[51643], 00:29:07.643 | 99.99th=[51643] 00:29:07.643 bw ( KiB/s): min= 2144, max= 2408, per=4.17%, avg=2275.20, stdev=81.93, samples=20 00:29:07.643 iops : min= 536, max= 602, avg=568.80, stdev=20.48, samples=20 00:29:07.643 lat (msec) : 20=4.40%, 50=95.46%, 100=0.14% 00:29:07.643 cpu : usr=98.41%, sys=1.17%, ctx=14, majf=0, minf=65 00:29:07.643 IO depths : 1=0.5%, 2=1.2%, 4=8.0%, 8=77.2%, 16=13.1%, 32=0.0%, >=64=0.0% 00:29:07.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.643 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.643 issued rwts: total=5704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.643 filename2: (groupid=0, jobs=1): err= 0: pid=2506242: Thu Jul 25 14:56:26 2024 00:29:07.643 read: IOPS=541, BW=2165KiB/s (2217kB/s)(21.2MiB/10006msec) 00:29:07.643 slat (nsec): min=6505, max=77763, avg=18716.76, stdev=12171.10 00:29:07.643 clat (usec): min=8341, max=54019, avg=29452.31, stdev=5974.61 00:29:07.643 lat (usec): min=8356, max=54036, avg=29471.02, stdev=5973.31 00:29:07.643 clat percentiles (usec): 00:29:07.643 | 1.00th=[14615], 5.00th=[21365], 10.00th=[24249], 20.00th=[25035], 00:29:07.643 | 30.00th=[25822], 40.00th=[26346], 50.00th=[28181], 60.00th=[31589], 00:29:07.643 | 70.00th=[32900], 
80.00th=[33817], 90.00th=[36439], 95.00th=[38536], 00:29:07.643 | 99.00th=[46924], 99.50th=[49546], 99.90th=[53740], 99.95th=[54264], 00:29:07.643 | 99.99th=[54264] 00:29:07.643 bw ( KiB/s): min= 1843, max= 2390, per=3.95%, avg=2159.65, stdev=121.94, samples=20 00:29:07.643 iops : min= 460, max= 597, avg=539.85, stdev=30.54, samples=20 00:29:07.643 lat (msec) : 10=0.26%, 20=4.28%, 50=95.01%, 100=0.44% 00:29:07.643 cpu : usr=98.48%, sys=1.12%, ctx=15, majf=0, minf=53 00:29:07.643 IO depths : 1=0.7%, 2=1.7%, 4=9.4%, 8=74.8%, 16=13.4%, 32=0.0%, >=64=0.0% 00:29:07.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.643 complete : 0=0.0%, 4=90.8%, 8=4.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.643 issued rwts: total=5416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.643 filename2: (groupid=0, jobs=1): err= 0: pid=2506243: Thu Jul 25 14:56:26 2024 00:29:07.643 read: IOPS=578, BW=2315KiB/s (2371kB/s)(22.6MiB/10007msec) 00:29:07.643 slat (nsec): min=6336, max=72946, avg=15220.38, stdev=10800.60 00:29:07.643 clat (usec): min=7965, max=56329, avg=27560.38, stdev=5215.94 00:29:07.643 lat (usec): min=7980, max=56347, avg=27575.60, stdev=5214.69 00:29:07.643 clat percentiles (usec): 00:29:07.643 | 1.00th=[14484], 5.00th=[20579], 10.00th=[23725], 20.00th=[24511], 00:29:07.643 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26084], 60.00th=[26608], 00:29:07.643 | 70.00th=[28705], 80.00th=[32113], 90.00th=[34341], 95.00th=[36963], 00:29:07.643 | 99.00th=[43779], 99.50th=[46924], 99.90th=[49546], 99.95th=[56361], 00:29:07.643 | 99.99th=[56361] 00:29:07.643 bw ( KiB/s): min= 2104, max= 2480, per=4.23%, avg=2311.50, stdev=90.38, samples=20 00:29:07.644 iops : min= 526, max= 620, avg=577.85, stdev=22.59, samples=20 00:29:07.644 lat (msec) : 10=0.24%, 20=4.39%, 50=95.32%, 100=0.05% 00:29:07.644 cpu : usr=98.58%, sys=0.96%, ctx=14, majf=0, minf=49 00:29:07.644 IO depths : 1=0.3%, 2=1.0%, 4=7.4%, 8=77.4%, 16=13.8%, 32=0.0%, >=64=0.0% 00:29:07.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.644 complete : 0=0.0%, 4=90.1%, 8=5.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.644 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.644 filename2: (groupid=0, jobs=1): err= 0: pid=2506244: Thu Jul 25 14:56:26 2024 00:29:07.644 read: IOPS=564, BW=2260KiB/s (2314kB/s)(22.1MiB/10014msec) 00:29:07.644 slat (nsec): min=4270, max=80645, avg=18255.21, stdev=11207.40 00:29:07.644 clat (usec): min=12231, max=50811, avg=28201.99, stdev=5451.41 00:29:07.644 lat (usec): min=12245, max=50824, avg=28220.24, stdev=5451.52 00:29:07.644 clat percentiles (usec): 00:29:07.644 | 1.00th=[15139], 5.00th=[21365], 10.00th=[23987], 20.00th=[24773], 00:29:07.644 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[27132], 00:29:07.644 | 70.00th=[31589], 80.00th=[33424], 90.00th=[35390], 95.00th=[36963], 00:29:07.644 | 99.00th=[46400], 99.50th=[48497], 99.90th=[50070], 99.95th=[50594], 00:29:07.644 | 99.99th=[50594] 00:29:07.644 bw ( KiB/s): min= 2128, max= 2384, per=4.13%, avg=2256.60, stdev=80.40, samples=20 00:29:07.644 iops : min= 532, max= 596, avg=564.15, stdev=20.10, samples=20 00:29:07.644 lat (msec) : 20=4.28%, 50=95.65%, 100=0.07% 00:29:07.644 cpu : usr=98.57%, sys=1.01%, ctx=17, majf=0, minf=84 00:29:07.644 IO depths : 1=0.8%, 2=1.7%, 4=8.5%, 8=76.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:29:07.644 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.644 complete : 0=0.0%, 4=90.0%, 8=5.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.644 issued rwts: total=5657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.644 filename2: (groupid=0, jobs=1): err= 0: pid=2506245: Thu Jul 25 14:56:26 2024 00:29:07.644 read: IOPS=565, BW=2261KiB/s (2315kB/s)(22.1MiB/10010msec) 00:29:07.644 slat (nsec): min=6850, max=82139, avg=14806.26, stdev=7144.22 00:29:07.644 clat (usec): min=12676, max=50810, avg=28221.36, stdev=5190.88 00:29:07.644 lat (usec): min=12694, max=50819, avg=28236.17, stdev=5190.85 00:29:07.644 clat percentiles (usec): 00:29:07.644 | 1.00th=[17171], 5.00th=[21365], 10.00th=[23987], 20.00th=[24773], 00:29:07.644 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[27132], 00:29:07.644 | 70.00th=[31065], 80.00th=[32900], 90.00th=[34866], 95.00th=[37487], 00:29:07.644 | 99.00th=[42730], 99.50th=[46400], 99.90th=[49546], 99.95th=[50594], 00:29:07.644 | 99.99th=[50594] 00:29:07.644 bw ( KiB/s): min= 2016, max= 2448, per=4.13%, avg=2256.80, stdev=100.92, samples=20 00:29:07.644 iops : min= 504, max= 612, avg=564.20, stdev=25.23, samples=20 00:29:07.644 lat (msec) : 20=3.85%, 50=96.08%, 100=0.07% 00:29:07.644 cpu : usr=98.49%, sys=1.10%, ctx=18, majf=0, minf=54 00:29:07.644 IO depths : 1=0.4%, 2=0.9%, 4=7.5%, 8=77.5%, 16=13.7%, 32=0.0%, >=64=0.0% 00:29:07.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.644 complete : 0=0.0%, 4=90.1%, 8=5.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.644 issued rwts: total=5658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.644 filename2: (groupid=0, jobs=1): err= 0: pid=2506246: Thu Jul 25 14:56:26 2024 00:29:07.644 read: IOPS=576, BW=2304KiB/s (2359kB/s)(22.5MiB/10015msec) 00:29:07.644 slat (nsec): min=4126, max=82038, avg=18526.39, stdev=11782.54 00:29:07.644 clat (usec): min=12131, max=50834, avg=27671.29, stdev=5059.42 00:29:07.644 lat (usec): min=12138, max=50843, avg=27689.81, stdev=5058.89 00:29:07.644 clat percentiles (usec): 00:29:07.644 | 1.00th=[15795], 5.00th=[20841], 10.00th=[23725], 20.00th=[24773], 00:29:07.644 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26084], 60.00th=[26608], 00:29:07.644 | 70.00th=[29492], 80.00th=[32375], 90.00th=[34341], 95.00th=[36439], 00:29:07.644 | 99.00th=[42730], 99.50th=[44827], 99.90th=[50594], 99.95th=[50594], 00:29:07.644 | 99.99th=[50594] 00:29:07.644 bw ( KiB/s): min= 2152, max= 2488, per=4.21%, avg=2301.20, stdev=91.22, samples=20 00:29:07.644 iops : min= 538, max= 622, avg=575.30, stdev=22.81, samples=20 00:29:07.644 lat (msec) : 20=4.56%, 50=95.30%, 100=0.14% 00:29:07.644 cpu : usr=98.60%, sys=0.97%, ctx=11, majf=0, minf=91 00:29:07.644 IO depths : 1=0.6%, 2=1.2%, 4=7.6%, 8=77.7%, 16=12.9%, 32=0.0%, >=64=0.0% 00:29:07.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.644 complete : 0=0.0%, 4=89.9%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.644 issued rwts: total=5769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.644 filename2: (groupid=0, jobs=1): err= 0: pid=2506247: Thu Jul 25 14:56:26 2024 00:29:07.644 read: IOPS=556, BW=2225KiB/s (2278kB/s)(21.7MiB/10006msec) 00:29:07.644 slat (nsec): min=6878, max=76728, avg=22109.81, stdev=15152.31 00:29:07.644 clat (usec): min=7372, 
max=53612, avg=28646.53, stdev=5491.80 00:29:07.644 lat (usec): min=7387, max=53624, avg=28668.64, stdev=5491.17 00:29:07.644 clat percentiles (usec): 00:29:07.644 | 1.00th=[16450], 5.00th=[23200], 10.00th=[24249], 20.00th=[24773], 00:29:07.644 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[28705], 00:29:07.644 | 70.00th=[32113], 80.00th=[33817], 90.00th=[35390], 95.00th=[36963], 00:29:07.644 | 99.00th=[45351], 99.50th=[49021], 99.90th=[53740], 99.95th=[53740], 00:29:07.644 | 99.99th=[53740] 00:29:07.644 bw ( KiB/s): min= 2000, max= 2400, per=4.07%, avg=2222.20, stdev=105.64, samples=20 00:29:07.644 iops : min= 500, max= 600, avg=555.55, stdev=26.41, samples=20 00:29:07.644 lat (msec) : 10=0.20%, 20=3.32%, 50=96.03%, 100=0.45% 00:29:07.644 cpu : usr=98.73%, sys=0.86%, ctx=15, majf=0, minf=60 00:29:07.644 IO depths : 1=0.1%, 2=0.5%, 4=7.3%, 8=78.6%, 16=13.4%, 32=0.0%, >=64=0.0% 00:29:07.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.644 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.644 issued rwts: total=5565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:07.644 00:29:07.644 Run status group 0 (all jobs): 00:29:07.644 READ: bw=53.3MiB/s (55.9MB/s), 2016KiB/s-2515KiB/s (2065kB/s-2576kB/s), io=535MiB (561MB), run=10003-10028msec 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:07.644 
14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:07.644 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.645 bdev_null0 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.645 [2024-07-25 14:56:26.568916] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.645 bdev_null1 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:07.645 14:56:26 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:07.645 { 00:29:07.645 "params": { 00:29:07.645 "name": "Nvme$subsystem", 00:29:07.645 "trtype": "$TEST_TRANSPORT", 00:29:07.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:07.645 "adrfam": "ipv4", 00:29:07.645 "trsvcid": "$NVMF_PORT", 00:29:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:07.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:07.645 "hdgst": ${hdgst:-false}, 00:29:07.645 "ddgst": ${ddgst:-false} 00:29:07.645 }, 00:29:07.645 "method": "bdev_nvme_attach_controller" 00:29:07.645 } 00:29:07.645 EOF 00:29:07.645 )") 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:07.645 { 00:29:07.645 "params": { 00:29:07.645 "name": "Nvme$subsystem", 00:29:07.645 "trtype": "$TEST_TRANSPORT", 00:29:07.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:07.645 "adrfam": "ipv4", 00:29:07.645 "trsvcid": "$NVMF_PORT", 00:29:07.645 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:07.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:07.645 "hdgst": ${hdgst:-false}, 00:29:07.645 "ddgst": ${ddgst:-false} 00:29:07.645 }, 00:29:07.645 "method": "bdev_nvme_attach_controller" 00:29:07.645 } 00:29:07.645 EOF 00:29:07.645 )") 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:07.645 "params": { 00:29:07.645 "name": "Nvme0", 00:29:07.645 "trtype": "tcp", 00:29:07.645 "traddr": "10.0.0.2", 00:29:07.645 "adrfam": "ipv4", 00:29:07.645 "trsvcid": "4420", 00:29:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:07.645 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:07.645 "hdgst": false, 00:29:07.645 "ddgst": false 00:29:07.645 }, 00:29:07.645 "method": "bdev_nvme_attach_controller" 00:29:07.645 },{ 00:29:07.645 "params": { 00:29:07.645 "name": "Nvme1", 00:29:07.645 "trtype": "tcp", 00:29:07.645 "traddr": "10.0.0.2", 00:29:07.645 "adrfam": "ipv4", 00:29:07.645 "trsvcid": "4420", 00:29:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:07.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:07.645 "hdgst": false, 00:29:07.645 "ddgst": false 00:29:07.645 }, 00:29:07.645 "method": "bdev_nvme_attach_controller" 00:29:07.645 }' 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:07.645 14:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:07.645 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:07.645 ... 00:29:07.645 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:07.645 ... 
00:29:07.645 fio-3.35 00:29:07.645 Starting 4 threads 00:29:07.645 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.945 00:29:12.945 filename0: (groupid=0, jobs=1): err= 0: pid=2508059: Thu Jul 25 14:56:32 2024 00:29:12.945 read: IOPS=2750, BW=21.5MiB/s (22.5MB/s)(107MiB/5002msec) 00:29:12.945 slat (nsec): min=6155, max=26757, avg=8567.94, stdev=2713.41 00:29:12.945 clat (usec): min=1532, max=43885, avg=2886.38, stdev=1108.18 00:29:12.945 lat (usec): min=1544, max=43898, avg=2894.95, stdev=1108.18 00:29:12.945 clat percentiles (usec): 00:29:12.945 | 1.00th=[ 1860], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2474], 00:29:12.945 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 2966], 00:29:12.945 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3425], 95.00th=[ 3589], 00:29:12.945 | 99.00th=[ 4080], 99.50th=[ 4293], 99.90th=[13173], 99.95th=[43779], 00:29:12.945 | 99.99th=[43779] 00:29:12.945 bw ( KiB/s): min=19840, max=23056, per=27.13%, avg=22003.20, stdev=900.68, samples=10 00:29:12.945 iops : min= 2480, max= 2882, avg=2750.40, stdev=112.58, samples=10 00:29:12.945 lat (msec) : 2=2.22%, 4=96.53%, 10=1.13%, 20=0.06%, 50=0.06% 00:29:12.945 cpu : usr=96.52%, sys=3.12%, ctx=7, majf=0, minf=0 00:29:12.945 IO depths : 1=0.1%, 2=1.0%, 4=66.9%, 8=32.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:12.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.945 complete : 0=0.0%, 4=95.4%, 8=4.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.945 issued rwts: total=13757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.945 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:12.945 filename0: (groupid=0, jobs=1): err= 0: pid=2508060: Thu Jul 25 14:56:32 2024 00:29:12.945 read: IOPS=2688, BW=21.0MiB/s (22.0MB/s)(105MiB/5002msec) 00:29:12.945 slat (nsec): min=6147, max=25051, avg=8661.45, stdev=2734.01 00:29:12.945 clat (usec): min=1351, max=7773, avg=2953.34, stdev=482.10 00:29:12.945 lat (usec): min=1358, max=7793, avg=2962.00, stdev=482.13 00:29:12.945 clat percentiles (usec): 00:29:12.945 | 1.00th=[ 1958], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2540], 00:29:12.945 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2966], 60.00th=[ 3064], 00:29:12.945 | 70.00th=[ 3163], 80.00th=[ 3326], 90.00th=[ 3556], 95.00th=[ 3720], 00:29:12.945 | 99.00th=[ 4228], 99.50th=[ 4490], 99.90th=[ 5145], 99.95th=[ 7046], 00:29:12.945 | 99.99th=[ 7701] 00:29:12.945 bw ( KiB/s): min=20752, max=22144, per=26.52%, avg=21506.60, stdev=531.76, samples=10 00:29:12.945 iops : min= 2594, max= 2768, avg=2688.30, stdev=66.49, samples=10 00:29:12.945 lat (msec) : 2=1.25%, 4=96.74%, 10=2.01% 00:29:12.945 cpu : usr=95.88%, sys=3.78%, ctx=7, majf=0, minf=9 00:29:12.945 IO depths : 1=0.2%, 2=1.4%, 4=66.1%, 8=32.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:12.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.945 complete : 0=0.0%, 4=95.9%, 8=4.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.945 issued rwts: total=13447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.945 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:12.945 filename1: (groupid=0, jobs=1): err= 0: pid=2508061: Thu Jul 25 14:56:32 2024 00:29:12.945 read: IOPS=2721, BW=21.3MiB/s (22.3MB/s)(107MiB/5042msec) 00:29:12.945 slat (nsec): min=6190, max=24765, avg=8614.07, stdev=2726.92 00:29:12.945 clat (usec): min=1447, max=44346, avg=2908.17, stdev=908.15 00:29:12.945 lat (usec): min=1457, max=44352, avg=2916.79, stdev=908.12 00:29:12.945 clat percentiles (usec): 00:29:12.945 | 1.00th=[ 
1942], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2474], 00:29:12.945 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2999], 00:29:12.945 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3490], 95.00th=[ 3720], 00:29:12.945 | 99.00th=[ 4178], 99.50th=[ 4490], 99.90th=[ 4883], 99.95th=[ 8029], 00:29:12.945 | 99.99th=[44303] 00:29:12.945 bw ( KiB/s): min=21456, max=23216, per=27.06%, avg=21945.60, stdev=495.81, samples=10 00:29:12.945 iops : min= 2682, max= 2902, avg=2743.20, stdev=61.98, samples=10 00:29:12.945 lat (msec) : 2=1.60%, 4=96.41%, 10=1.95%, 50=0.04% 00:29:12.945 cpu : usr=96.55%, sys=3.13%, ctx=6, majf=0, minf=0 00:29:12.945 IO depths : 1=0.1%, 2=1.5%, 4=66.6%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:12.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.945 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.945 issued rwts: total=13721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.945 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:12.945 filename1: (groupid=0, jobs=1): err= 0: pid=2508062: Thu Jul 25 14:56:32 2024 00:29:12.945 read: IOPS=2035, BW=15.9MiB/s (16.7MB/s)(79.6MiB/5002msec) 00:29:12.945 slat (nsec): min=6186, max=28783, avg=8609.72, stdev=2688.30 00:29:12.945 clat (usec): min=1768, max=15850, avg=3906.06, stdev=786.59 00:29:12.945 lat (usec): min=1775, max=15874, avg=3914.67, stdev=786.59 00:29:12.945 clat percentiles (usec): 00:29:12.945 | 1.00th=[ 2376], 5.00th=[ 2802], 10.00th=[ 3064], 20.00th=[ 3326], 00:29:12.945 | 30.00th=[ 3523], 40.00th=[ 3654], 50.00th=[ 3818], 60.00th=[ 4015], 00:29:12.945 | 70.00th=[ 4228], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 5145], 00:29:12.945 | 99.00th=[ 5932], 99.50th=[ 6456], 99.90th=[ 9241], 99.95th=[12649], 00:29:12.945 | 99.99th=[15795] 00:29:12.945 bw ( KiB/s): min=15744, max=16848, per=20.13%, avg=16321.78, stdev=330.86, samples=9 00:29:12.945 iops : min= 1968, max= 2106, avg=2040.22, stdev=41.36, samples=9 00:29:12.945 lat (msec) : 2=0.09%, 4=59.54%, 10=40.29%, 20=0.08% 00:29:12.945 cpu : usr=96.94%, sys=2.72%, ctx=6, majf=0, minf=9 00:29:12.945 IO depths : 1=0.2%, 2=2.3%, 4=66.0%, 8=31.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:12.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.945 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.945 issued rwts: total=10183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.945 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:12.945 00:29:12.945 Run status group 0 (all jobs): 00:29:12.945 READ: bw=79.2MiB/s (83.0MB/s), 15.9MiB/s-21.5MiB/s (16.7MB/s-22.5MB/s), io=399MiB (419MB), run=5002-5042msec 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:12.945 14:56:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:12.945 14:56:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.945 00:29:12.946 real 0m24.112s 00:29:12.946 user 4m51.112s 00:29:12.946 sys 0m4.849s 00:29:12.946 14:56:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:12.946 14:56:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:12.946 ************************************ 00:29:12.946 END TEST fio_dif_rand_params 00:29:12.946 ************************************ 00:29:12.946 14:56:32 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:12.946 14:56:32 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:12.946 14:56:32 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:12.946 14:56:32 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.946 14:56:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:12.946 ************************************ 00:29:12.946 START TEST fio_dif_digest 00:29:12.946 ************************************ 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest 
-- target/dif.sh@128 -- # ddgst=true 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:12.946 bdev_null0 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:12.946 [2024-07-25 14:56:32.913142] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:12.946 { 00:29:12.946 "params": { 00:29:12.946 "name": 
"Nvme$subsystem", 00:29:12.946 "trtype": "$TEST_TRANSPORT", 00:29:12.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.946 "adrfam": "ipv4", 00:29:12.946 "trsvcid": "$NVMF_PORT", 00:29:12.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.946 "hdgst": ${hdgst:-false}, 00:29:12.946 "ddgst": ${ddgst:-false} 00:29:12.946 }, 00:29:12.946 "method": "bdev_nvme_attach_controller" 00:29:12.946 } 00:29:12.946 EOF 00:29:12.946 )") 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:12.946 "params": { 00:29:12.946 "name": "Nvme0", 00:29:12.946 "trtype": "tcp", 00:29:12.946 "traddr": "10.0.0.2", 00:29:12.946 "adrfam": "ipv4", 00:29:12.946 "trsvcid": "4420", 00:29:12.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:12.946 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:12.946 "hdgst": true, 00:29:12.946 "ddgst": true 00:29:12.946 }, 00:29:12.946 "method": "bdev_nvme_attach_controller" 00:29:12.946 }' 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:12.946 14:56:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:13.206 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:13.206 ... 
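Before this digest run, the target side was prepared by the rpc_cmd calls visible in the trace above: a 64 MiB null bdev with 512-byte blocks, 16-byte metadata and DIF type 3, exported over NVMe/TCP on 10.0.0.2:4420. The same sequence issued by hand with scripts/rpc.py against a running nvmf_tgt looks roughly as follows; the rpc.py path is an assumption, and the transport-creation step is done earlier in the suite and is included here only so the sketch is self-contained.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumption: repo-relative path
  $RPC nvmf_create_transport -t tcp                                      # performed earlier by the harness
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3      # 64 MiB, 512 B blocks, DIF type 3
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420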
00:29:13.206 fio-3.35 00:29:13.206 Starting 3 threads 00:29:13.206 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.424 00:29:25.424 filename0: (groupid=0, jobs=1): err= 0: pid=2509248: Thu Jul 25 14:56:43 2024 00:29:25.424 read: IOPS=335, BW=41.9MiB/s (43.9MB/s)(421MiB/10047msec) 00:29:25.424 slat (nsec): min=6576, max=47905, avg=15021.75, stdev=7639.17 00:29:25.424 clat (usec): min=5430, max=57188, avg=8918.74, stdev=3049.31 00:29:25.425 lat (usec): min=5438, max=57199, avg=8933.76, stdev=3050.76 00:29:25.425 clat percentiles (usec): 00:29:25.425 | 1.00th=[ 5866], 5.00th=[ 6128], 10.00th=[ 6456], 20.00th=[ 7177], 00:29:25.425 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9241], 00:29:25.425 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11076], 95.00th=[11469], 00:29:25.425 | 99.00th=[12649], 99.50th=[14091], 99.90th=[54264], 99.95th=[56361], 00:29:25.425 | 99.99th=[57410] 00:29:25.425 bw ( KiB/s): min=35840, max=47872, per=47.70%, avg=43084.80, stdev=3093.37, samples=20 00:29:25.425 iops : min= 280, max= 374, avg=336.60, stdev=24.17, samples=20 00:29:25.425 lat (msec) : 10=72.00%, 20=27.58%, 50=0.12%, 100=0.30% 00:29:25.425 cpu : usr=96.13%, sys=3.31%, ctx=24, majf=0, minf=210 00:29:25.425 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:25.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.425 issued rwts: total=3368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:25.425 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:25.425 filename0: (groupid=0, jobs=1): err= 0: pid=2509249: Thu Jul 25 14:56:43 2024 00:29:25.425 read: IOPS=182, BW=22.9MiB/s (24.0MB/s)(230MiB/10048msec) 00:29:25.425 slat (nsec): min=6651, max=42744, avg=20754.58, stdev=6815.05 00:29:25.425 clat (msec): min=6, max=103, avg=16.35, stdev=14.47 00:29:25.425 lat (msec): min=6, max=103, avg=16.38, stdev=14.47 00:29:25.425 clat percentiles (msec): 00:29:25.425 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:29:25.425 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:29:25.425 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 52], 95.00th=[ 56], 00:29:25.425 | 99.00th=[ 59], 99.50th=[ 61], 99.90th=[ 103], 99.95th=[ 104], 00:29:25.425 | 99.99th=[ 104] 00:29:25.425 bw ( KiB/s): min=17408, max=30464, per=26.00%, avg=23488.00, stdev=3509.97, samples=20 00:29:25.425 iops : min= 136, max= 238, avg=183.50, stdev=27.42, samples=20 00:29:25.425 lat (msec) : 10=20.84%, 20=67.68%, 50=0.38%, 100=10.99%, 250=0.11% 00:29:25.425 cpu : usr=97.11%, sys=2.49%, ctx=19, majf=0, minf=97 00:29:25.425 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:25.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.425 issued rwts: total=1838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:25.425 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:25.425 filename0: (groupid=0, jobs=1): err= 0: pid=2509250: Thu Jul 25 14:56:43 2024 00:29:25.425 read: IOPS=187, BW=23.4MiB/s (24.6MB/s)(236MiB/10049msec) 00:29:25.425 slat (nsec): min=6751, max=41898, avg=18766.46, stdev=7153.58 00:29:25.425 clat (usec): min=8016, max=94674, avg=15946.77, stdev=13271.78 00:29:25.425 lat (usec): min=8031, max=94690, avg=15965.54, stdev=13271.71 00:29:25.425 clat percentiles (usec): 00:29:25.425 | 1.00th=[ 8586], 5.00th=[ 9241], 
10.00th=[ 9503], 20.00th=[10159], 00:29:25.425 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11731], 60.00th=[12125], 00:29:25.425 | 70.00th=[12518], 80.00th=[13042], 90.00th=[51119], 95.00th=[53216], 00:29:25.425 | 99.00th=[55313], 99.50th=[55837], 99.90th=[93848], 99.95th=[94897], 00:29:25.425 | 99.99th=[94897] 00:29:25.425 bw ( KiB/s): min=16896, max=28672, per=26.67%, avg=24091.65, stdev=3758.58, samples=20 00:29:25.425 iops : min= 132, max= 224, avg=188.20, stdev=29.38, samples=20 00:29:25.425 lat (msec) : 10=16.87%, 20=72.31%, 50=0.16%, 100=10.66% 00:29:25.425 cpu : usr=96.12%, sys=3.20%, ctx=19, majf=0, minf=159 00:29:25.425 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:25.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.425 issued rwts: total=1885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:25.425 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:25.425 00:29:25.425 Run status group 0 (all jobs): 00:29:25.425 READ: bw=88.2MiB/s (92.5MB/s), 22.9MiB/s-41.9MiB/s (24.0MB/s-43.9MB/s), io=886MiB (929MB), run=10047-10049msec 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.425 00:29:25.425 real 0m11.146s 00:29:25.425 user 0m35.362s 00:29:25.425 sys 0m1.256s 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:25.425 14:56:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:25.425 ************************************ 00:29:25.425 END TEST fio_dif_digest 00:29:25.425 ************************************ 00:29:25.425 14:56:44 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:25.425 14:56:44 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:25.425 14:56:44 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:25.425 14:56:44 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:25.425 14:56:44 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:25.425 14:56:44 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:25.425 14:56:44 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:25.425 14:56:44 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:25.425 14:56:44 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:25.425 
rmmod nvme_tcp 00:29:25.425 rmmod nvme_fabrics 00:29:25.425 rmmod nvme_keyring 00:29:25.425 14:56:44 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:25.425 14:56:44 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:25.425 14:56:44 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:25.425 14:56:44 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2500645 ']' 00:29:25.425 14:56:44 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2500645 00:29:25.425 14:56:44 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2500645 ']' 00:29:25.425 14:56:44 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2500645 00:29:25.425 14:56:44 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:29:25.425 14:56:44 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:25.425 14:56:44 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2500645 00:29:25.425 14:56:44 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:25.425 14:56:44 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:25.425 14:56:44 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2500645' 00:29:25.425 killing process with pid 2500645 00:29:25.425 14:56:44 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2500645 00:29:25.425 14:56:44 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2500645 00:29:25.425 14:56:44 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:25.425 14:56:44 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:26.879 Waiting for block devices as requested 00:29:26.879 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:26.879 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:26.879 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:26.879 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:26.879 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:27.139 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:27.139 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:27.139 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:27.139 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:27.399 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:27.399 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:27.399 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:27.659 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:27.659 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:27.659 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:27.659 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:27.919 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:27.919 14:56:48 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:27.919 14:56:48 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:27.919 14:56:48 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:27.919 14:56:48 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:27.919 14:56:48 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.919 14:56:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:27.919 14:56:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.828 14:56:50 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:29.828 00:29:29.828 real 1m12.999s 00:29:29.828 user 7m9.367s 00:29:29.828 sys 0m18.241s 00:29:29.828 14:56:50 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:29.828 
14:56:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:29.828 ************************************ 00:29:29.828 END TEST nvmf_dif 00:29:29.828 ************************************ 00:29:30.088 14:56:50 -- common/autotest_common.sh@1142 -- # return 0 00:29:30.088 14:56:50 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:30.088 14:56:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:30.088 14:56:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:30.088 14:56:50 -- common/autotest_common.sh@10 -- # set +x 00:29:30.088 ************************************ 00:29:30.088 START TEST nvmf_abort_qd_sizes 00:29:30.088 ************************************ 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:30.088 * Looking for test storage... 00:29:30.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.088 14:56:50 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:30.088 14:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:35.364 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.364 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:35.364 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:35.364 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:35.364 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:35.364 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:35.364 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:35.364 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:35.364 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:35.364 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:35.364 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:35.364 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:35.364 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:35.365 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:35.365 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:35.365 Found net devices under 0000:86:00.0: cvl_0_0 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:35.365 Found net devices under 0000:86:00.1: cvl_0_1 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:35.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:29:35.365 00:29:35.365 --- 10.0.0.2 ping statistics --- 00:29:35.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.365 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:35.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.465 ms 00:29:35.365 00:29:35.365 --- 10.0.0.1 ping statistics --- 00:29:35.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.365 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:35.365 14:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:37.905 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:37.905 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:37.905 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:37.905 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:37.905 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:37.905 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:37.905 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:37.906 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:37.906 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:37.906 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:37.906 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:37.906 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:37.906 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:37.906 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:37.906 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:37.906 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:38.475 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2516944 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2516944 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2516944 ']' 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:38.735 14:56:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:38.735 [2024-07-25 14:56:58.910235] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:29:38.735 [2024-07-25 14:56:58.910283] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.735 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.735 [2024-07-25 14:56:58.970176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:38.994 [2024-07-25 14:56:59.054711] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:38.994 [2024-07-25 14:56:59.054747] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:38.994 [2024-07-25 14:56:59.054754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:38.994 [2024-07-25 14:56:59.054760] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:38.994 [2024-07-25 14:56:59.054766] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:38.994 [2024-07-25 14:56:59.054812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.994 [2024-07-25 14:56:59.054911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:38.994 [2024-07-25 14:56:59.054929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:38.994 [2024-07-25 14:56:59.054931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@319 
-- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:39.563 14:56:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:29:39.564 14:56:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:39.564 14:56:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:39.564 14:56:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:39.564 14:56:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:39.564 ************************************ 00:29:39.564 START TEST spdk_target_abort 00:29:39.564 ************************************ 00:29:39.564 14:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:29:39.564 14:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:39.564 14:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:29:39.564 14:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.564 14:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.853 spdk_targetn1 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.853 [2024-07-25 14:57:02.643814] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.853 [2024-07-25 14:57:02.676724] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:42.853 14:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:42.853 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.144 Initializing NVMe Controllers 00:29:46.144 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:46.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:46.144 Initialization complete. Launching workers. 00:29:46.144 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5670, failed: 0 00:29:46.144 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1398, failed to submit 4272 00:29:46.144 success 925, unsuccess 473, failed 0 00:29:46.144 14:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:46.144 14:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:46.144 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.436 Initializing NVMe Controllers 00:29:49.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:49.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:49.436 Initialization complete. Launching workers. 00:29:49.436 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8764, failed: 0 00:29:49.436 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1227, failed to submit 7537 00:29:49.436 success 331, unsuccess 896, failed 0 00:29:49.436 14:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:49.436 14:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:49.436 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.788 Initializing NVMe Controllers 00:29:52.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:52.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:52.788 Initialization complete. Launching workers. 
00:29:52.788 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33468, failed: 0 00:29:52.788 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2824, failed to submit 30644 00:29:52.788 success 699, unsuccess 2125, failed 0 00:29:52.788 14:57:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:52.788 14:57:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.788 14:57:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:52.788 14:57:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.788 14:57:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:52.788 14:57:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.788 14:57:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:53.725 14:57:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.725 14:57:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2516944 00:29:53.725 14:57:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2516944 ']' 00:29:53.725 14:57:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2516944 00:29:53.725 14:57:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:29:53.725 14:57:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:53.725 14:57:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2516944 00:29:53.725 14:57:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:53.725 14:57:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:53.725 14:57:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2516944' 00:29:53.725 killing process with pid 2516944 00:29:53.725 14:57:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2516944 00:29:53.725 14:57:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2516944 00:29:53.984 00:29:53.984 real 0m14.223s 00:29:53.984 user 0m56.793s 00:29:53.984 sys 0m2.119s 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:53.985 ************************************ 00:29:53.985 END TEST spdk_target_abort 00:29:53.985 ************************************ 00:29:53.985 14:57:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:53.985 14:57:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:53.985 14:57:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:53.985 14:57:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:53.985 14:57:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:53.985 
************************************ 00:29:53.985 START TEST kernel_target_abort 00:29:53.985 ************************************ 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:53.985 14:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:56.524 Waiting for block devices as requested 00:29:56.524 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:56.524 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:56.524 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:56.524 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:56.524 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:56.524 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:56.524 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:56.524 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:56.784 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:56.784 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:56.784 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:57.044 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:57.044 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:57.044 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:57.044 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:57.304 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:57.304 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:57.304 No valid GPT data, bailing 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:57.304 14:57:17 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:57.304 00:29:57.304 Discovery Log Number of Records 2, Generation counter 2 00:29:57.304 =====Discovery Log Entry 0====== 00:29:57.304 trtype: tcp 00:29:57.304 adrfam: ipv4 00:29:57.304 subtype: current discovery subsystem 00:29:57.304 treq: not specified, sq flow control disable supported 00:29:57.304 portid: 1 00:29:57.304 trsvcid: 4420 00:29:57.304 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:57.304 traddr: 10.0.0.1 00:29:57.304 eflags: none 00:29:57.304 sectype: none 00:29:57.304 =====Discovery Log Entry 1====== 00:29:57.304 trtype: tcp 00:29:57.304 adrfam: ipv4 00:29:57.304 subtype: nvme subsystem 00:29:57.304 treq: not specified, sq flow control disable supported 00:29:57.304 portid: 1 00:29:57.304 trsvcid: 4420 00:29:57.304 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:57.304 traddr: 10.0.0.1 00:29:57.304 eflags: none 00:29:57.304 sectype: none 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:57.304 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:57.305 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:57.305 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:57.305 14:57:17 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:57.305 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:57.305 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:57.305 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:57.305 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:57.305 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:57.305 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:57.305 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:57.305 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:57.305 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:57.305 14:57:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:57.564 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.857 Initializing NVMe Controllers 00:30:00.858 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:00.858 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:00.858 Initialization complete. Launching workers. 00:30:00.858 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26858, failed: 0 00:30:00.858 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26858, failed to submit 0 00:30:00.858 success 0, unsuccess 26858, failed 0 00:30:00.858 14:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:00.858 14:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:00.858 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.149 Initializing NVMe Controllers 00:30:04.149 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:04.149 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:04.149 Initialization complete. Launching workers. 
00:30:04.149 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57255, failed: 0 00:30:04.149 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14426, failed to submit 42829 00:30:04.149 success 0, unsuccess 14426, failed 0 00:30:04.149 14:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:04.149 14:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:04.149 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.688 Initializing NVMe Controllers 00:30:06.688 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:06.688 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:06.688 Initialization complete. Launching workers. 00:30:06.688 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56461, failed: 0 00:30:06.688 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14090, failed to submit 42371 00:30:06.688 success 0, unsuccess 14090, failed 0 00:30:06.688 14:57:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:06.688 14:57:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:06.688 14:57:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:06.688 14:57:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:06.688 14:57:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:06.688 14:57:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:06.688 14:57:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:06.688 14:57:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:06.688 14:57:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:06.688 14:57:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:09.226 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:09.226 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:09.226 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:09.226 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:09.226 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:09.226 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:09.226 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:09.226 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:09.226 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:09.226 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:09.226 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:09.226 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:09.226 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:09.226 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:30:09.226 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:09.226 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:09.794 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:10.055 00:30:10.055 real 0m16.030s 00:30:10.055 user 0m3.750s 00:30:10.055 sys 0m4.895s 00:30:10.055 14:57:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:10.055 14:57:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:10.055 ************************************ 00:30:10.055 END TEST kernel_target_abort 00:30:10.055 ************************************ 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:10.055 rmmod nvme_tcp 00:30:10.055 rmmod nvme_fabrics 00:30:10.055 rmmod nvme_keyring 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2516944 ']' 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2516944 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2516944 ']' 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2516944 00:30:10.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2516944) - No such process 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2516944 is not found' 00:30:10.055 Process with pid 2516944 is not found 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:10.055 14:57:30 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:12.634 Waiting for block devices as requested 00:30:12.634 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:12.634 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:12.634 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:12.634 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:12.634 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:12.893 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:12.893 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:12.893 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:12.893 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:13.153 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:13.153 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:13.153 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:13.153 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:13.412 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:30:13.412 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:13.412 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:13.672 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:13.672 14:57:33 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:13.672 14:57:33 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:13.672 14:57:33 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:13.672 14:57:33 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:13.672 14:57:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.672 14:57:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:13.672 14:57:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.579 14:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:15.579 00:30:15.579 real 0m45.646s 00:30:15.579 user 1m4.191s 00:30:15.579 sys 0m14.599s 00:30:15.579 14:57:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:15.579 14:57:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:15.579 ************************************ 00:30:15.579 END TEST nvmf_abort_qd_sizes 00:30:15.579 ************************************ 00:30:15.579 14:57:35 -- common/autotest_common.sh@1142 -- # return 0 00:30:15.579 14:57:35 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:15.579 14:57:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:15.579 14:57:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:15.579 14:57:35 -- common/autotest_common.sh@10 -- # set +x 00:30:15.839 ************************************ 00:30:15.839 START TEST keyring_file 00:30:15.839 ************************************ 00:30:15.839 14:57:35 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:15.839 * Looking for test storage... 
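[annotation] The kernel_target_abort test that just finished builds its target entirely inside the kernel's nvmet configfs tree (the mkdir/echo/ln -s sequence near 14:57:17) and clean_kernel_target removes it again before modprobe -r. A minimal sketch of that configfs flow, assuming the nvmet and nvmet_tcp modules are already loaded and /dev/nvme0n1 backs the namespace; attribute names follow the stock kernel nvmet layout rather than the exact helper in nvmf/common.sh:

  nqn=nqn.2016-06.io.spdk:testnqn
  mkdir -p /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
  echo 1 > /sys/kernel/config/nvmet/subsystems/$nqn/attr_allow_any_host
  echo /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/device_path
  echo 1 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable
  mkdir /sys/kernel/config/nvmet/ports/1
  echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/$nqn /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
  # teardown mirrors the clean_kernel_target steps in the trace: remove the link, rmdir the
  # namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet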
00:30:15.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:15.839 14:57:35 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:15.839 14:57:35 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.839 14:57:35 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.839 14:57:35 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.839 14:57:35 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.839 14:57:35 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.839 14:57:35 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.839 14:57:35 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.839 14:57:35 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:15.839 14:57:35 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:15.839 14:57:35 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:15.839 14:57:36 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:15.839 14:57:36 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:15.839 14:57:36 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:15.839 14:57:36 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:15.839 14:57:36 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:15.839 14:57:36 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AIhszNn7l7 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:15.839 14:57:36 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:15.839 14:57:36 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:15.839 14:57:36 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:15.839 14:57:36 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:15.839 14:57:36 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:15.839 14:57:36 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:15.839 14:57:36 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AIhszNn7l7 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AIhszNn7l7 00:30:15.839 14:57:36 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.AIhszNn7l7 00:30:15.839 14:57:36 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.q8vcXwuJHv 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:15.839 14:57:36 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:15.839 14:57:36 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:15.839 14:57:36 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:15.839 14:57:36 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:15.839 14:57:36 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:15.839 14:57:36 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.q8vcXwuJHv 00:30:15.839 14:57:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.q8vcXwuJHv 00:30:15.839 14:57:36 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.q8vcXwuJHv 00:30:15.839 14:57:36 keyring_file -- keyring/file.sh@30 -- # tgtpid=2526029 00:30:15.839 14:57:36 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2526029 00:30:15.839 14:57:36 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:15.839 14:57:36 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2526029 ']' 00:30:15.839 14:57:36 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.839 14:57:36 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:15.839 14:57:36 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.839 14:57:36 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:15.839 14:57:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:16.099 [2024-07-25 14:57:36.151432] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
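[annotation] Before spdk_tgt comes up, prep_key writes each PSK to a mktemp file and restricts it to mode 0600; the files are later registered with the bdevperf instance as named keys over /var/tmp/bperf.sock. A sketch of that flow, under the assumption that format_interchange_psk emits "NVMeTLSkey-1:<hash-id>:base64(psk bytes + CRC-32):" with the CRC appended little-endian (the actual encoding is done by the python step in nvmf/common.sh shown above):

  key_hex=00112233445566778899aabbccddeeff
  key_path=$(mktemp)
  # assumption: interchange encoding = prefix, hash id, base64(psk || crc32)
  python3 -c 'import base64,sys,zlib; psk=bytes.fromhex(sys.argv[1]); crc=zlib.crc32(psk).to_bytes(4,"little"); print("NVMeTLSkey-1:00:"+base64.b64encode(psk+crc).decode()+":")' "$key_hex" > "$key_path"
  chmod 0600 "$key_path"
  # registration with the bdevperf RPC server, as in the trace:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_path"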
00:30:16.099 [2024-07-25 14:57:36.151485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526029 ] 00:30:16.099 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.099 [2024-07-25 14:57:36.204984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.099 [2024-07-25 14:57:36.285099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.666 14:57:36 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:16.666 14:57:36 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:16.666 14:57:36 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:16.666 14:57:36 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.666 14:57:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:16.926 [2024-07-25 14:57:36.962059] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.926 null0 00:30:16.926 [2024-07-25 14:57:36.994119] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:16.926 [2024-07-25 14:57:36.994418] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:16.926 [2024-07-25 14:57:37.002121] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.926 14:57:37 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:16.926 [2024-07-25 14:57:37.014153] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:16.926 request: 00:30:16.926 { 00:30:16.926 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:16.926 "secure_channel": false, 00:30:16.926 "listen_address": { 00:30:16.926 "trtype": "tcp", 00:30:16.926 "traddr": "127.0.0.1", 00:30:16.926 "trsvcid": "4420" 00:30:16.926 }, 00:30:16.926 "method": "nvmf_subsystem_add_listener", 00:30:16.926 "req_id": 1 00:30:16.926 } 00:30:16.926 Got JSON-RPC error response 00:30:16.926 response: 00:30:16.926 { 00:30:16.926 "code": -32602, 00:30:16.926 "message": "Invalid parameters" 00:30:16.926 } 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@651 -- # es=1 
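[annotation] The nvmf_subsystem_add_listener call above is a deliberate negative test: the listener already exists, spdk_tgt answers with JSON-RPC error -32602, and the NOT wrapper turns that expected failure into a pass (the es=1, (( es > 128 )) and (( !es == 0 )) lines around it are its bookkeeping). A simplified rendering of the pattern, covering only the inversion logic and not the argument-validation and signal handling the real helper in autotest_common.sh performs:

  NOT() {
      if "$@"; then
          return 1    # wrapped command unexpectedly succeeded -> test failure
      else
          return 0    # wrapped command failed as expected -> test passes
      fi
  }
  # expected to fail with "Listener already exists" while 127.0.0.1:4420 is registered
  NOT scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0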
00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:16.926 14:57:37 keyring_file -- keyring/file.sh@46 -- # bperfpid=2526105 00:30:16.926 14:57:37 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:16.926 14:57:37 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2526105 /var/tmp/bperf.sock 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2526105 ']' 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:16.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:16.926 14:57:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:16.926 [2024-07-25 14:57:37.063559] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 00:30:16.926 [2024-07-25 14:57:37.063601] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526105 ] 00:30:16.926 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.926 [2024-07-25 14:57:37.115600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.926 [2024-07-25 14:57:37.189115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.863 14:57:37 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:17.863 14:57:37 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:17.863 14:57:37 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AIhszNn7l7 00:30:17.863 14:57:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AIhszNn7l7 00:30:17.863 14:57:38 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.q8vcXwuJHv 00:30:17.863 14:57:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.q8vcXwuJHv 00:30:18.123 14:57:38 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:18.123 14:57:38 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:18.123 14:57:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:18.123 14:57:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:18.123 14:57:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:18.123 14:57:38 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.AIhszNn7l7 == \/\t\m\p\/\t\m\p\.\A\I\h\s\z\N\n\7\l\7 ]] 00:30:18.123 14:57:38 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:30:18.123 14:57:38 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:18.123 14:57:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:18.123 14:57:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:18.123 14:57:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:18.384 14:57:38 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.q8vcXwuJHv == \/\t\m\p\/\t\m\p\.\q\8\v\c\X\w\u\J\H\v ]] 00:30:18.384 14:57:38 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:18.384 14:57:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:18.384 14:57:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:18.384 14:57:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:18.384 14:57:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:18.384 14:57:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:18.643 14:57:38 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:18.643 14:57:38 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:18.643 14:57:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:18.643 14:57:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:18.643 14:57:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:18.643 14:57:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:18.643 14:57:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:18.643 14:57:38 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:18.643 14:57:38 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.643 14:57:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.903 [2024-07-25 14:57:39.082587] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:18.903 nvme0n1 00:30:18.903 14:57:39 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:18.903 14:57:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:18.903 14:57:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:18.903 14:57:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:18.903 14:57:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:18.903 14:57:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.162 14:57:39 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:19.163 14:57:39 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:19.163 14:57:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:19.163 14:57:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:19.163 14:57:39 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:19.163 14:57:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:19.163 14:57:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.422 14:57:39 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:19.422 14:57:39 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:19.422 Running I/O for 1 seconds... 00:30:20.802 00:30:20.802 Latency(us) 00:30:20.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.802 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:20.802 nvme0n1 : 1.12 1812.15 7.08 0.00 0.00 67742.87 6952.51 166860.35 00:30:20.802 =================================================================================================================== 00:30:20.802 Total : 1812.15 7.08 0.00 0.00 67742.87 6952.51 166860.35 00:30:20.802 0 00:30:20.802 14:57:40 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:20.802 14:57:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:20.802 14:57:40 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:20.802 14:57:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:20.802 14:57:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:20.802 14:57:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:20.802 14:57:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:20.802 14:57:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:21.062 14:57:41 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:21.062 14:57:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:21.062 14:57:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:21.062 14:57:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:21.062 14:57:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:21.062 14:57:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:21.062 14:57:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:21.062 14:57:41 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:21.062 14:57:41 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:21.062 14:57:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:21.062 14:57:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:21.062 14:57:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:21.062 14:57:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:21.062 14:57:41 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:21.062 14:57:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:21.062 14:57:41 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:21.062 14:57:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:21.322 [2024-07-25 14:57:41.456407] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:21.322 [2024-07-25 14:57:41.456982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167c780 (107): Transport endpoint is not connected 00:30:21.322 [2024-07-25 14:57:41.457975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167c780 (9): Bad file descriptor 00:30:21.322 [2024-07-25 14:57:41.458974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:21.322 [2024-07-25 14:57:41.458991] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:21.322 [2024-07-25 14:57:41.458997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:21.322 request: 00:30:21.322 { 00:30:21.322 "name": "nvme0", 00:30:21.322 "trtype": "tcp", 00:30:21.322 "traddr": "127.0.0.1", 00:30:21.322 "adrfam": "ipv4", 00:30:21.322 "trsvcid": "4420", 00:30:21.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:21.322 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:21.322 "prchk_reftag": false, 00:30:21.322 "prchk_guard": false, 00:30:21.322 "hdgst": false, 00:30:21.322 "ddgst": false, 00:30:21.322 "psk": "key1", 00:30:21.322 "method": "bdev_nvme_attach_controller", 00:30:21.322 "req_id": 1 00:30:21.322 } 00:30:21.322 Got JSON-RPC error response 00:30:21.322 response: 00:30:21.322 { 00:30:21.322 "code": -5, 00:30:21.322 "message": "Input/output error" 00:30:21.322 } 00:30:21.322 14:57:41 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:21.322 14:57:41 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:21.322 14:57:41 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:21.322 14:57:41 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:21.322 14:57:41 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:21.322 14:57:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:21.322 14:57:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:21.322 14:57:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:21.322 14:57:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:21.322 14:57:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:21.581 14:57:41 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:21.581 14:57:41 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:21.581 14:57:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:21.581 14:57:41 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:21.581 14:57:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:21.581 14:57:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:21.581 14:57:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:21.581 14:57:41 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:21.581 14:57:41 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:21.581 14:57:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:21.841 14:57:41 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:21.841 14:57:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:22.100 14:57:42 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:22.100 14:57:42 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:22.100 14:57:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:22.100 14:57:42 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:22.100 14:57:42 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.AIhszNn7l7 00:30:22.100 14:57:42 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.AIhszNn7l7 00:30:22.100 14:57:42 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:22.100 14:57:42 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.AIhszNn7l7 00:30:22.100 14:57:42 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:22.100 14:57:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:22.100 14:57:42 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:22.100 14:57:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:22.100 14:57:42 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AIhszNn7l7 00:30:22.100 14:57:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AIhszNn7l7 00:30:22.359 [2024-07-25 14:57:42.467810] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AIhszNn7l7': 0100660 00:30:22.359 [2024-07-25 14:57:42.467835] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:22.359 request: 00:30:22.359 { 00:30:22.359 "name": "key0", 00:30:22.359 "path": "/tmp/tmp.AIhszNn7l7", 00:30:22.359 "method": "keyring_file_add_key", 00:30:22.359 "req_id": 1 00:30:22.359 } 00:30:22.359 Got JSON-RPC error response 00:30:22.359 response: 00:30:22.359 { 00:30:22.359 "code": -1, 00:30:22.359 "message": "Operation not permitted" 00:30:22.359 } 00:30:22.359 14:57:42 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:22.359 14:57:42 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:22.359 14:57:42 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:22.359 14:57:42 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:22.359 14:57:42 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.AIhszNn7l7 00:30:22.359 14:57:42 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AIhszNn7l7 00:30:22.359 14:57:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AIhszNn7l7 00:30:22.618 14:57:42 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.AIhszNn7l7 00:30:22.618 14:57:42 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:22.618 14:57:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:22.618 14:57:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:22.618 14:57:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:22.618 14:57:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:22.618 14:57:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:22.618 14:57:42 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:22.618 14:57:42 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:22.618 14:57:42 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:22.618 14:57:42 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:22.618 14:57:42 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:22.618 14:57:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:22.618 14:57:42 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:22.618 14:57:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:22.618 14:57:42 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:22.618 14:57:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:22.877 [2024-07-25 14:57:43.001223] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.AIhszNn7l7': No such file or directory 00:30:22.877 [2024-07-25 14:57:43.001245] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:22.877 [2024-07-25 14:57:43.001266] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:22.877 [2024-07-25 14:57:43.001272] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:22.877 [2024-07-25 14:57:43.001278] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:22.877 request: 00:30:22.877 { 00:30:22.877 "name": "nvme0", 00:30:22.877 "trtype": "tcp", 00:30:22.877 "traddr": "127.0.0.1", 00:30:22.877 "adrfam": "ipv4", 00:30:22.877 
"trsvcid": "4420", 00:30:22.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:22.877 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:22.877 "prchk_reftag": false, 00:30:22.877 "prchk_guard": false, 00:30:22.877 "hdgst": false, 00:30:22.877 "ddgst": false, 00:30:22.877 "psk": "key0", 00:30:22.877 "method": "bdev_nvme_attach_controller", 00:30:22.877 "req_id": 1 00:30:22.877 } 00:30:22.877 Got JSON-RPC error response 00:30:22.877 response: 00:30:22.877 { 00:30:22.877 "code": -19, 00:30:22.877 "message": "No such device" 00:30:22.877 } 00:30:22.877 14:57:43 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:22.877 14:57:43 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:22.877 14:57:43 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:22.877 14:57:43 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:22.877 14:57:43 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:22.877 14:57:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:23.136 14:57:43 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:23.136 14:57:43 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:23.136 14:57:43 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:23.136 14:57:43 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:23.136 14:57:43 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:23.136 14:57:43 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:23.136 14:57:43 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yt7cR6it6h 00:30:23.136 14:57:43 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:23.136 14:57:43 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:23.136 14:57:43 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:23.136 14:57:43 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:23.136 14:57:43 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:23.136 14:57:43 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:23.136 14:57:43 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:23.136 14:57:43 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yt7cR6it6h 00:30:23.136 14:57:43 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yt7cR6it6h 00:30:23.136 14:57:43 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.yt7cR6it6h 00:30:23.136 14:57:43 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yt7cR6it6h 00:30:23.136 14:57:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yt7cR6it6h 00:30:23.395 14:57:43 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:23.395 14:57:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:23.395 nvme0n1 00:30:23.395 
14:57:43 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:23.395 14:57:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:23.395 14:57:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:23.395 14:57:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:23.395 14:57:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:23.395 14:57:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:23.655 14:57:43 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:23.655 14:57:43 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:23.655 14:57:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:23.914 14:57:44 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:23.914 14:57:44 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:23.914 14:57:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:23.914 14:57:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:23.914 14:57:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:23.914 14:57:44 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:23.914 14:57:44 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:24.173 14:57:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:24.173 14:57:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:24.173 14:57:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:24.173 14:57:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:24.173 14:57:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:24.173 14:57:44 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:24.173 14:57:44 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:24.173 14:57:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:24.432 14:57:44 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:24.432 14:57:44 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:24.432 14:57:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:24.692 14:57:44 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:24.692 14:57:44 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yt7cR6it6h 00:30:24.692 14:57:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yt7cR6it6h 00:30:24.692 14:57:44 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.q8vcXwuJHv 00:30:24.692 14:57:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.q8vcXwuJHv 00:30:24.952 14:57:45 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:24.952 14:57:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:25.212 nvme0n1 00:30:25.212 14:57:45 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:25.212 14:57:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:25.472 14:57:45 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:25.472 "subsystems": [ 00:30:25.472 { 00:30:25.472 "subsystem": "keyring", 00:30:25.472 "config": [ 00:30:25.472 { 00:30:25.472 "method": "keyring_file_add_key", 00:30:25.472 "params": { 00:30:25.472 "name": "key0", 00:30:25.472 "path": "/tmp/tmp.yt7cR6it6h" 00:30:25.472 } 00:30:25.472 }, 00:30:25.472 { 00:30:25.472 "method": "keyring_file_add_key", 00:30:25.472 "params": { 00:30:25.472 "name": "key1", 00:30:25.472 "path": "/tmp/tmp.q8vcXwuJHv" 00:30:25.472 } 00:30:25.472 } 00:30:25.472 ] 00:30:25.472 }, 00:30:25.472 { 00:30:25.472 "subsystem": "iobuf", 00:30:25.472 "config": [ 00:30:25.472 { 00:30:25.472 "method": "iobuf_set_options", 00:30:25.472 "params": { 00:30:25.472 "small_pool_count": 8192, 00:30:25.472 "large_pool_count": 1024, 00:30:25.472 "small_bufsize": 8192, 00:30:25.472 "large_bufsize": 135168 00:30:25.472 } 00:30:25.472 } 00:30:25.472 ] 00:30:25.472 }, 00:30:25.472 { 00:30:25.472 "subsystem": "sock", 00:30:25.472 "config": [ 00:30:25.472 { 00:30:25.472 "method": "sock_set_default_impl", 00:30:25.472 "params": { 00:30:25.472 "impl_name": "posix" 00:30:25.472 } 00:30:25.472 }, 00:30:25.472 { 00:30:25.472 "method": "sock_impl_set_options", 00:30:25.472 "params": { 00:30:25.472 "impl_name": "ssl", 00:30:25.472 "recv_buf_size": 4096, 00:30:25.472 "send_buf_size": 4096, 00:30:25.472 "enable_recv_pipe": true, 00:30:25.472 "enable_quickack": false, 00:30:25.472 "enable_placement_id": 0, 00:30:25.472 "enable_zerocopy_send_server": true, 00:30:25.472 "enable_zerocopy_send_client": false, 00:30:25.472 "zerocopy_threshold": 0, 00:30:25.472 "tls_version": 0, 00:30:25.472 "enable_ktls": false 00:30:25.472 } 00:30:25.472 }, 00:30:25.472 { 00:30:25.472 "method": "sock_impl_set_options", 00:30:25.472 "params": { 00:30:25.472 "impl_name": "posix", 00:30:25.472 "recv_buf_size": 2097152, 00:30:25.472 "send_buf_size": 2097152, 00:30:25.472 "enable_recv_pipe": true, 00:30:25.472 "enable_quickack": false, 00:30:25.472 "enable_placement_id": 0, 00:30:25.472 "enable_zerocopy_send_server": true, 00:30:25.472 "enable_zerocopy_send_client": false, 00:30:25.472 "zerocopy_threshold": 0, 00:30:25.472 "tls_version": 0, 00:30:25.472 "enable_ktls": false 00:30:25.472 } 00:30:25.472 } 00:30:25.472 ] 00:30:25.472 }, 00:30:25.472 { 00:30:25.472 "subsystem": "vmd", 00:30:25.472 "config": [] 00:30:25.472 }, 00:30:25.472 { 00:30:25.472 "subsystem": "accel", 00:30:25.472 "config": [ 00:30:25.472 { 00:30:25.472 "method": "accel_set_options", 00:30:25.472 "params": { 00:30:25.472 "small_cache_size": 128, 00:30:25.472 "large_cache_size": 16, 00:30:25.472 "task_count": 2048, 00:30:25.472 "sequence_count": 2048, 00:30:25.472 "buf_count": 2048 00:30:25.472 } 00:30:25.472 } 00:30:25.472 ] 00:30:25.472 
}, 00:30:25.472 { 00:30:25.472 "subsystem": "bdev", 00:30:25.472 "config": [ 00:30:25.472 { 00:30:25.472 "method": "bdev_set_options", 00:30:25.472 "params": { 00:30:25.472 "bdev_io_pool_size": 65535, 00:30:25.472 "bdev_io_cache_size": 256, 00:30:25.472 "bdev_auto_examine": true, 00:30:25.472 "iobuf_small_cache_size": 128, 00:30:25.472 "iobuf_large_cache_size": 16 00:30:25.472 } 00:30:25.472 }, 00:30:25.472 { 00:30:25.472 "method": "bdev_raid_set_options", 00:30:25.472 "params": { 00:30:25.472 "process_window_size_kb": 1024 00:30:25.472 } 00:30:25.472 }, 00:30:25.472 { 00:30:25.472 "method": "bdev_iscsi_set_options", 00:30:25.472 "params": { 00:30:25.472 "timeout_sec": 30 00:30:25.472 } 00:30:25.472 }, 00:30:25.472 { 00:30:25.472 "method": "bdev_nvme_set_options", 00:30:25.472 "params": { 00:30:25.472 "action_on_timeout": "none", 00:30:25.472 "timeout_us": 0, 00:30:25.472 "timeout_admin_us": 0, 00:30:25.472 "keep_alive_timeout_ms": 10000, 00:30:25.472 "arbitration_burst": 0, 00:30:25.472 "low_priority_weight": 0, 00:30:25.472 "medium_priority_weight": 0, 00:30:25.472 "high_priority_weight": 0, 00:30:25.472 "nvme_adminq_poll_period_us": 10000, 00:30:25.472 "nvme_ioq_poll_period_us": 0, 00:30:25.472 "io_queue_requests": 512, 00:30:25.472 "delay_cmd_submit": true, 00:30:25.472 "transport_retry_count": 4, 00:30:25.472 "bdev_retry_count": 3, 00:30:25.472 "transport_ack_timeout": 0, 00:30:25.472 "ctrlr_loss_timeout_sec": 0, 00:30:25.472 "reconnect_delay_sec": 0, 00:30:25.472 "fast_io_fail_timeout_sec": 0, 00:30:25.472 "disable_auto_failback": false, 00:30:25.472 "generate_uuids": false, 00:30:25.472 "transport_tos": 0, 00:30:25.472 "nvme_error_stat": false, 00:30:25.472 "rdma_srq_size": 0, 00:30:25.472 "io_path_stat": false, 00:30:25.472 "allow_accel_sequence": false, 00:30:25.472 "rdma_max_cq_size": 0, 00:30:25.472 "rdma_cm_event_timeout_ms": 0, 00:30:25.472 "dhchap_digests": [ 00:30:25.472 "sha256", 00:30:25.472 "sha384", 00:30:25.472 "sha512" 00:30:25.472 ], 00:30:25.472 "dhchap_dhgroups": [ 00:30:25.472 "null", 00:30:25.472 "ffdhe2048", 00:30:25.472 "ffdhe3072", 00:30:25.472 "ffdhe4096", 00:30:25.472 "ffdhe6144", 00:30:25.472 "ffdhe8192" 00:30:25.472 ] 00:30:25.472 } 00:30:25.472 }, 00:30:25.472 { 00:30:25.472 "method": "bdev_nvme_attach_controller", 00:30:25.472 "params": { 00:30:25.472 "name": "nvme0", 00:30:25.472 "trtype": "TCP", 00:30:25.472 "adrfam": "IPv4", 00:30:25.472 "traddr": "127.0.0.1", 00:30:25.472 "trsvcid": "4420", 00:30:25.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:25.472 "prchk_reftag": false, 00:30:25.472 "prchk_guard": false, 00:30:25.472 "ctrlr_loss_timeout_sec": 0, 00:30:25.472 "reconnect_delay_sec": 0, 00:30:25.472 "fast_io_fail_timeout_sec": 0, 00:30:25.472 "psk": "key0", 00:30:25.472 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:25.472 "hdgst": false, 00:30:25.472 "ddgst": false 00:30:25.472 } 00:30:25.472 }, 00:30:25.473 { 00:30:25.473 "method": "bdev_nvme_set_hotplug", 00:30:25.473 "params": { 00:30:25.473 "period_us": 100000, 00:30:25.473 "enable": false 00:30:25.473 } 00:30:25.473 }, 00:30:25.473 { 00:30:25.473 "method": "bdev_wait_for_examine" 00:30:25.473 } 00:30:25.473 ] 00:30:25.473 }, 00:30:25.473 { 00:30:25.473 "subsystem": "nbd", 00:30:25.473 "config": [] 00:30:25.473 } 00:30:25.473 ] 00:30:25.473 }' 00:30:25.473 14:57:45 keyring_file -- keyring/file.sh@114 -- # killprocess 2526105 00:30:25.473 14:57:45 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2526105 ']' 00:30:25.473 14:57:45 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 2526105 00:30:25.473 14:57:45 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:25.473 14:57:45 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:25.473 14:57:45 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2526105 00:30:25.473 14:57:45 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:25.473 14:57:45 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:25.473 14:57:45 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2526105' 00:30:25.473 killing process with pid 2526105 00:30:25.473 14:57:45 keyring_file -- common/autotest_common.sh@967 -- # kill 2526105 00:30:25.473 Received shutdown signal, test time was about 1.000000 seconds 00:30:25.473 00:30:25.473 Latency(us) 00:30:25.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.473 =================================================================================================================== 00:30:25.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:25.473 14:57:45 keyring_file -- common/autotest_common.sh@972 -- # wait 2526105 00:30:25.733 14:57:45 keyring_file -- keyring/file.sh@117 -- # bperfpid=2527619 00:30:25.733 14:57:45 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2527619 /var/tmp/bperf.sock 00:30:25.733 14:57:45 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2527619 ']' 00:30:25.733 14:57:45 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:25.733 14:57:45 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:25.733 14:57:45 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:25.733 14:57:45 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:25.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
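Note: the keyring portion of the configuration echoed below can be sketched as the following stand-alone launch. This is a condensed illustration, not the literal keyring/file.sh code; the binary path, flags and key files are taken from the trace, and the full config echoed below additionally attaches nvme0 over TCP with psk key0. The process substitution is why the config shows up as /dev/fd/63 on the bdevperf command line.

    # Condensed sketch (run from the SPDK repo root); the real test backgrounds
    # bdevperf and drives it over the /var/tmp/bperf.sock RPC socket.
    build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo '{
          "subsystems": [ { "subsystem": "keyring", "config": [
            { "method": "keyring_file_add_key", "params": { "name": "key0", "path": "/tmp/tmp.yt7cR6it6h" } },
            { "method": "keyring_file_add_key", "params": { "name": "key1", "path": "/tmp/tmp.q8vcXwuJHv" } }
          ] } ]
        }') &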
00:30:25.733 14:57:45 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:25.733 "subsystems": [ 00:30:25.733 { 00:30:25.733 "subsystem": "keyring", 00:30:25.733 "config": [ 00:30:25.733 { 00:30:25.733 "method": "keyring_file_add_key", 00:30:25.733 "params": { 00:30:25.733 "name": "key0", 00:30:25.733 "path": "/tmp/tmp.yt7cR6it6h" 00:30:25.733 } 00:30:25.733 }, 00:30:25.733 { 00:30:25.733 "method": "keyring_file_add_key", 00:30:25.733 "params": { 00:30:25.733 "name": "key1", 00:30:25.733 "path": "/tmp/tmp.q8vcXwuJHv" 00:30:25.733 } 00:30:25.733 } 00:30:25.733 ] 00:30:25.733 }, 00:30:25.733 { 00:30:25.733 "subsystem": "iobuf", 00:30:25.733 "config": [ 00:30:25.733 { 00:30:25.733 "method": "iobuf_set_options", 00:30:25.733 "params": { 00:30:25.733 "small_pool_count": 8192, 00:30:25.733 "large_pool_count": 1024, 00:30:25.733 "small_bufsize": 8192, 00:30:25.733 "large_bufsize": 135168 00:30:25.733 } 00:30:25.733 } 00:30:25.733 ] 00:30:25.733 }, 00:30:25.733 { 00:30:25.733 "subsystem": "sock", 00:30:25.733 "config": [ 00:30:25.733 { 00:30:25.733 "method": "sock_set_default_impl", 00:30:25.733 "params": { 00:30:25.733 "impl_name": "posix" 00:30:25.733 } 00:30:25.733 }, 00:30:25.733 { 00:30:25.733 "method": "sock_impl_set_options", 00:30:25.733 "params": { 00:30:25.733 "impl_name": "ssl", 00:30:25.733 "recv_buf_size": 4096, 00:30:25.733 "send_buf_size": 4096, 00:30:25.733 "enable_recv_pipe": true, 00:30:25.733 "enable_quickack": false, 00:30:25.733 "enable_placement_id": 0, 00:30:25.733 "enable_zerocopy_send_server": true, 00:30:25.733 "enable_zerocopy_send_client": false, 00:30:25.733 "zerocopy_threshold": 0, 00:30:25.733 "tls_version": 0, 00:30:25.733 "enable_ktls": false 00:30:25.733 } 00:30:25.733 }, 00:30:25.733 { 00:30:25.733 "method": "sock_impl_set_options", 00:30:25.733 "params": { 00:30:25.733 "impl_name": "posix", 00:30:25.733 "recv_buf_size": 2097152, 00:30:25.733 "send_buf_size": 2097152, 00:30:25.733 "enable_recv_pipe": true, 00:30:25.733 "enable_quickack": false, 00:30:25.733 "enable_placement_id": 0, 00:30:25.733 "enable_zerocopy_send_server": true, 00:30:25.733 "enable_zerocopy_send_client": false, 00:30:25.733 "zerocopy_threshold": 0, 00:30:25.733 "tls_version": 0, 00:30:25.733 "enable_ktls": false 00:30:25.733 } 00:30:25.733 } 00:30:25.733 ] 00:30:25.733 }, 00:30:25.733 { 00:30:25.733 "subsystem": "vmd", 00:30:25.733 "config": [] 00:30:25.733 }, 00:30:25.733 { 00:30:25.733 "subsystem": "accel", 00:30:25.733 "config": [ 00:30:25.733 { 00:30:25.733 "method": "accel_set_options", 00:30:25.733 "params": { 00:30:25.733 "small_cache_size": 128, 00:30:25.733 "large_cache_size": 16, 00:30:25.733 "task_count": 2048, 00:30:25.733 "sequence_count": 2048, 00:30:25.733 "buf_count": 2048 00:30:25.733 } 00:30:25.734 } 00:30:25.734 ] 00:30:25.734 }, 00:30:25.734 { 00:30:25.734 "subsystem": "bdev", 00:30:25.734 "config": [ 00:30:25.734 { 00:30:25.734 "method": "bdev_set_options", 00:30:25.734 "params": { 00:30:25.734 "bdev_io_pool_size": 65535, 00:30:25.734 "bdev_io_cache_size": 256, 00:30:25.734 "bdev_auto_examine": true, 00:30:25.734 "iobuf_small_cache_size": 128, 00:30:25.734 "iobuf_large_cache_size": 16 00:30:25.734 } 00:30:25.734 }, 00:30:25.734 { 00:30:25.734 "method": "bdev_raid_set_options", 00:30:25.734 "params": { 00:30:25.734 "process_window_size_kb": 1024 00:30:25.734 } 00:30:25.734 }, 00:30:25.734 { 00:30:25.734 "method": "bdev_iscsi_set_options", 00:30:25.734 "params": { 00:30:25.734 "timeout_sec": 30 00:30:25.734 } 00:30:25.734 }, 00:30:25.734 { 00:30:25.734 "method": 
"bdev_nvme_set_options", 00:30:25.734 "params": { 00:30:25.734 "action_on_timeout": "none", 00:30:25.734 "timeout_us": 0, 00:30:25.734 "timeout_admin_us": 0, 00:30:25.734 "keep_alive_timeout_ms": 10000, 00:30:25.734 "arbitration_burst": 0, 00:30:25.734 "low_priority_weight": 0, 00:30:25.734 "medium_priority_weight": 0, 00:30:25.734 "high_priority_weight": 0, 00:30:25.734 "nvme_adminq_poll_period_us": 10000, 00:30:25.734 "nvme_ioq_poll_period_us": 0, 00:30:25.734 "io_queue_requests": 512, 00:30:25.734 "delay_cmd_submit": true, 00:30:25.734 "transport_retry_count": 4, 00:30:25.734 "bdev_retry_count": 3, 00:30:25.734 "transport_ack_timeout": 0, 00:30:25.734 "ctrlr_loss_timeout_sec": 0, 00:30:25.734 "reconnect_delay_sec": 0, 00:30:25.734 "fast_io_fail_timeout_sec": 0, 00:30:25.734 "disable_auto_failback": false, 00:30:25.734 "generate_uuids": false, 00:30:25.734 "transport_tos": 0, 00:30:25.734 "nvme_error_stat": false, 00:30:25.734 "rdma_srq_size": 0, 00:30:25.734 "io_path_stat": false, 00:30:25.734 "allow_accel_sequence": false, 00:30:25.734 "rdma_max_cq_size": 0, 00:30:25.734 "rdma_cm_event_timeout_ms": 0, 00:30:25.734 "dhchap_digests": [ 00:30:25.734 "sha256", 00:30:25.734 "sha384", 00:30:25.734 "sha512" 00:30:25.734 ], 00:30:25.734 "dhchap_dhgroups": [ 00:30:25.734 "null", 00:30:25.734 "ffdhe2048", 00:30:25.734 "ffdhe3072", 00:30:25.734 "ffdhe4096", 00:30:25.734 "ffdhe6144", 00:30:25.734 "ffdhe8192" 00:30:25.734 ] 00:30:25.734 } 00:30:25.734 }, 00:30:25.734 { 00:30:25.734 "method": "bdev_nvme_attach_controller", 00:30:25.734 "params": { 00:30:25.734 "name": "nvme0", 00:30:25.734 "trtype": "TCP", 00:30:25.734 "adrfam": "IPv4", 00:30:25.734 "traddr": "127.0.0.1", 00:30:25.734 "trsvcid": "4420", 00:30:25.734 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:25.734 "prchk_reftag": false, 00:30:25.734 "prchk_guard": false, 00:30:25.734 "ctrlr_loss_timeout_sec": 0, 00:30:25.734 "reconnect_delay_sec": 0, 00:30:25.734 "fast_io_fail_timeout_sec": 0, 00:30:25.734 "psk": "key0", 00:30:25.734 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:25.734 "hdgst": false, 00:30:25.734 "ddgst": false 00:30:25.734 } 00:30:25.734 }, 00:30:25.734 { 00:30:25.734 "method": "bdev_nvme_set_hotplug", 00:30:25.734 "params": { 00:30:25.734 "period_us": 100000, 00:30:25.734 "enable": false 00:30:25.734 } 00:30:25.734 }, 00:30:25.734 { 00:30:25.734 "method": "bdev_wait_for_examine" 00:30:25.734 } 00:30:25.734 ] 00:30:25.734 }, 00:30:25.734 { 00:30:25.734 "subsystem": "nbd", 00:30:25.734 "config": [] 00:30:25.734 } 00:30:25.734 ] 00:30:25.734 }' 00:30:25.734 14:57:45 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:25.734 14:57:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:25.734 [2024-07-25 14:57:45.837350] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:30:25.734 [2024-07-25 14:57:45.837399] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2527619 ] 00:30:25.734 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.734 [2024-07-25 14:57:45.891206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.734 [2024-07-25 14:57:45.960172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.994 [2024-07-25 14:57:46.119974] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:26.605 14:57:46 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:26.605 14:57:46 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:26.605 14:57:46 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:26.605 14:57:46 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:26.605 14:57:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:26.605 14:57:46 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:26.605 14:57:46 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:26.605 14:57:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:26.605 14:57:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:26.605 14:57:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:26.605 14:57:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:26.605 14:57:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:26.887 14:57:46 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:26.887 14:57:46 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:26.887 14:57:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:26.887 14:57:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:26.887 14:57:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:26.887 14:57:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:26.887 14:57:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:26.887 14:57:47 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:26.887 14:57:47 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:26.887 14:57:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:26.887 14:57:47 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:27.146 14:57:47 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:27.146 14:57:47 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:27.146 14:57:47 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.yt7cR6it6h /tmp/tmp.q8vcXwuJHv 00:30:27.147 14:57:47 keyring_file -- keyring/file.sh@20 -- # killprocess 2527619 00:30:27.147 14:57:47 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2527619 ']' 00:30:27.147 14:57:47 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2527619 00:30:27.147 14:57:47 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:30:27.147 14:57:47 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:27.147 14:57:47 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2527619 00:30:27.147 14:57:47 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:27.147 14:57:47 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:27.147 14:57:47 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2527619' 00:30:27.147 killing process with pid 2527619 00:30:27.147 14:57:47 keyring_file -- common/autotest_common.sh@967 -- # kill 2527619 00:30:27.147 Received shutdown signal, test time was about 1.000000 seconds 00:30:27.147 00:30:27.147 Latency(us) 00:30:27.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.147 =================================================================================================================== 00:30:27.147 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:27.147 14:57:47 keyring_file -- common/autotest_common.sh@972 -- # wait 2527619 00:30:27.406 14:57:47 keyring_file -- keyring/file.sh@21 -- # killprocess 2526029 00:30:27.406 14:57:47 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2526029 ']' 00:30:27.406 14:57:47 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2526029 00:30:27.406 14:57:47 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:27.406 14:57:47 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:27.406 14:57:47 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2526029 00:30:27.406 14:57:47 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:27.406 14:57:47 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:27.406 14:57:47 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2526029' 00:30:27.406 killing process with pid 2526029 00:30:27.406 14:57:47 keyring_file -- common/autotest_common.sh@967 -- # kill 2526029 00:30:27.406 [2024-07-25 14:57:47.615369] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:27.406 14:57:47 keyring_file -- common/autotest_common.sh@972 -- # wait 2526029 00:30:27.665 00:30:27.665 real 0m12.044s 00:30:27.665 user 0m28.626s 00:30:27.665 sys 0m2.458s 00:30:27.665 14:57:47 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:27.665 14:57:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:27.665 ************************************ 00:30:27.665 END TEST keyring_file 00:30:27.665 ************************************ 00:30:27.926 14:57:47 -- common/autotest_common.sh@1142 -- # return 0 00:30:27.926 14:57:47 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:30:27.926 14:57:47 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:27.926 14:57:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:27.926 14:57:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:27.926 14:57:47 -- common/autotest_common.sh@10 -- # set +x 00:30:27.926 ************************************ 00:30:27.926 START TEST keyring_linux 00:30:27.926 ************************************ 00:30:27.926 14:57:47 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:27.926 * Looking for test storage... 00:30:27.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:27.926 14:57:48 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:27.926 14:57:48 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.926 14:57:48 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.926 14:57:48 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.926 14:57:48 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.926 14:57:48 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.926 14:57:48 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.926 14:57:48 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.926 14:57:48 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:27.926 14:57:48 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:27.926 14:57:48 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:27.926 14:57:48 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:27.926 14:57:48 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:27.926 14:57:48 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:27.926 14:57:48 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:27.926 14:57:48 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:27.926 14:57:48 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:27.926 14:57:48 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:27.926 14:57:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:27.926 14:57:48 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:30:27.926 14:57:48 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:27.926 14:57:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:27.926 14:57:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:27.926 14:57:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:27.927 14:57:48 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:27.927 14:57:48 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:27.927 14:57:48 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:27.927 14:57:48 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:27.927 14:57:48 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:27.927 14:57:48 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:27.927 14:57:48 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:27.927 14:57:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:27.927 /tmp/:spdk-test:key0 00:30:27.927 14:57:48 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:27.927 14:57:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:27.927 14:57:48 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:27.927 14:57:48 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:27.927 14:57:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:27.927 14:57:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:27.927 14:57:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:27.927 14:57:48 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:27.927 14:57:48 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:27.927 14:57:48 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:27.927 14:57:48 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:27.927 14:57:48 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:27.927 14:57:48 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:27.927 14:57:48 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:27.927 14:57:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:27.927 /tmp/:spdk-test:key1 00:30:27.927 14:57:48 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2528168 00:30:27.927 14:57:48 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2528168 00:30:27.927 14:57:48 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:27.927 14:57:48 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2528168 ']' 00:30:27.927 14:57:48 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.927 14:57:48 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:27.927 14:57:48 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.927 14:57:48 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:27.927 14:57:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:28.187 [2024-07-25 14:57:48.234763] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
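Note: before the target comes up, the two interchange-format PSKs generated above are written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 and restricted to mode 0600. A minimal sketch of that preparation is below; the literal key string is the one echoed by the keyctl steps later in this trace, whereas the real prep_key helper derives it from the hex key with format_interchange_psk via the inline python call shown above.

    # Sketch of prep_key for key0 (key1 is prepared the same way).
    echo -n 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > '/tmp/:spdk-test:key0'
    chmod 0600 '/tmp/:spdk-test:key0'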
00:30:28.187 [2024-07-25 14:57:48.234814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528168 ] 00:30:28.187 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.187 [2024-07-25 14:57:48.286407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.187 [2024-07-25 14:57:48.366442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.755 14:57:49 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:28.755 14:57:49 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:28.755 14:57:49 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:28.755 14:57:49 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.755 14:57:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:28.755 [2024-07-25 14:57:49.036389] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.015 null0 00:30:29.015 [2024-07-25 14:57:49.068449] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:29.015 [2024-07-25 14:57:49.068766] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:29.015 14:57:49 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.016 14:57:49 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:29.016 71388964 00:30:29.016 14:57:49 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:29.016 158350849 00:30:29.016 14:57:49 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2528306 00:30:29.016 14:57:49 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2528306 /var/tmp/bperf.sock 00:30:29.016 14:57:49 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:29.016 14:57:49 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2528306 ']' 00:30:29.016 14:57:49 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:29.016 14:57:49 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:29.016 14:57:49 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:29.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:29.016 14:57:49 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:29.016 14:57:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:29.016 [2024-07-25 14:57:49.137607] Starting SPDK v24.09-pre git sha1 e7b600835 / DPDK 24.03.0 initialization... 
00:30:29.016 [2024-07-25 14:57:49.137651] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528306 ] 00:30:29.016 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.016 [2024-07-25 14:57:49.190825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.016 [2024-07-25 14:57:49.270692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.954 14:57:49 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:29.954 14:57:49 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:29.954 14:57:49 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:29.954 14:57:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:29.954 14:57:50 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:29.954 14:57:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:30.214 14:57:50 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:30.214 14:57:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:30.473 [2024-07-25 14:57:50.552194] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:30.473 nvme0n1 00:30:30.473 14:57:50 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:30:30.473 14:57:50 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:30.473 14:57:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:30.473 14:57:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:30.473 14:57:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:30.473 14:57:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:30.732 14:57:50 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:30.732 14:57:50 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:30.732 14:57:50 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:30.732 14:57:50 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:30.732 14:57:50 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:30.732 14:57:50 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:30.732 14:57:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:30.732 14:57:51 keyring_linux -- keyring/linux.sh@25 -- # sn=71388964 00:30:30.732 14:57:51 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:30.732 14:57:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:30:30.732 14:57:51 keyring_linux -- keyring/linux.sh@26 -- # [[ 71388964 == \7\1\3\8\8\9\6\4 ]] 00:30:30.732 14:57:51 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 71388964 00:30:30.732 14:57:51 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:30:30.732 14:57:51 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:30.992 Running I/O for 1 seconds... 00:30:31.932 00:30:31.932 Latency(us) 00:30:31.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.932 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:31.932 nvme0n1 : 1.04 2603.46 10.17 0.00 0.00 48355.28 12537.32 63370.46 00:30:31.932 =================================================================================================================== 00:30:31.932 Total : 2603.46 10.17 0.00 0.00 48355.28 12537.32 63370.46 00:30:31.932 0 00:30:31.932 14:57:52 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:31.932 14:57:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:32.192 14:57:52 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:30:32.192 14:57:52 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:30:32.192 14:57:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:32.192 14:57:52 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:32.192 14:57:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:32.192 14:57:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@23 -- # return 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:32.452 14:57:52 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:32.452 [2024-07-25 14:57:52.678123] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:32.452 [2024-07-25 14:57:52.678231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2769500 (107): Transport endpoint is not connected 00:30:32.452 [2024-07-25 14:57:52.679227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2769500 (9): Bad file descriptor 00:30:32.452 [2024-07-25 14:57:52.680227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:32.452 [2024-07-25 14:57:52.680238] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:32.452 [2024-07-25 14:57:52.680244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:32.452 request: 00:30:32.452 { 00:30:32.452 "name": "nvme0", 00:30:32.452 "trtype": "tcp", 00:30:32.452 "traddr": "127.0.0.1", 00:30:32.452 "adrfam": "ipv4", 00:30:32.452 "trsvcid": "4420", 00:30:32.452 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:32.452 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:32.452 "prchk_reftag": false, 00:30:32.452 "prchk_guard": false, 00:30:32.452 "hdgst": false, 00:30:32.452 "ddgst": false, 00:30:32.452 "psk": ":spdk-test:key1", 00:30:32.452 "method": "bdev_nvme_attach_controller", 00:30:32.452 "req_id": 1 00:30:32.452 } 00:30:32.452 Got JSON-RPC error response 00:30:32.452 response: 00:30:32.452 { 00:30:32.452 "code": -5, 00:30:32.452 "message": "Input/output error" 00:30:32.452 } 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@33 -- # sn=71388964 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 71388964 00:30:32.452 1 links removed 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@33 -- # sn=158350849 00:30:32.452 14:57:52 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 158350849 00:30:32.452 1 links removed 00:30:32.452 14:57:52 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2528306 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2528306 ']' 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2528306 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:32.452 14:57:52 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2528306 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2528306' 00:30:32.712 killing process with pid 2528306 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@967 -- # kill 2528306 00:30:32.712 Received shutdown signal, test time was about 1.000000 seconds 00:30:32.712 00:30:32.712 Latency(us) 00:30:32.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.712 =================================================================================================================== 00:30:32.712 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@972 -- # wait 2528306 00:30:32.712 14:57:52 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2528168 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2528168 ']' 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2528168 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2528168 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2528168' 00:30:32.712 killing process with pid 2528168 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@967 -- # kill 2528168 00:30:32.712 14:57:52 keyring_linux -- common/autotest_common.sh@972 -- # wait 2528168 00:30:33.279 00:30:33.279 real 0m5.311s 00:30:33.279 user 0m9.343s 00:30:33.279 sys 0m1.213s 00:30:33.279 14:57:53 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:33.279 14:57:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:33.279 ************************************ 00:30:33.279 END TEST keyring_linux 00:30:33.279 ************************************ 00:30:33.279 14:57:53 -- common/autotest_common.sh@1142 -- # return 0 00:30:33.279 14:57:53 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:33.279 14:57:53 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:33.279 14:57:53 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:30:33.279 14:57:53 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:30:33.279 14:57:53 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:30:33.279 14:57:53 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:33.279 14:57:53 -- spdk/autotest.sh@339 
-- # '[' 0 -eq 1 ']' 00:30:33.279 14:57:53 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:33.279 14:57:53 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:33.279 14:57:53 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:33.279 14:57:53 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:33.279 14:57:53 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:33.279 14:57:53 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:33.279 14:57:53 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:33.279 14:57:53 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:33.279 14:57:53 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:33.279 14:57:53 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:33.279 14:57:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:33.279 14:57:53 -- common/autotest_common.sh@10 -- # set +x 00:30:33.279 14:57:53 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:33.279 14:57:53 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:33.279 14:57:53 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:33.279 14:57:53 -- common/autotest_common.sh@10 -- # set +x 00:30:37.478 INFO: APP EXITING 00:30:37.479 INFO: killing all VMs 00:30:37.479 INFO: killing vhost app 00:30:37.479 INFO: EXIT DONE 00:30:40.017 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:30:40.017 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:30:40.017 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:30:40.017 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:30:40.017 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:30:40.017 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:30:40.017 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:30:40.017 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:30:40.017 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:30:40.276 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:30:40.276 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:30:40.276 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:30:40.276 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:30:40.276 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:30:40.276 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:30:40.276 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:30:40.276 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:30:42.815 Cleaning 00:30:42.815 Removing: /var/run/dpdk/spdk0/config 00:30:42.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:42.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:42.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:42.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:42.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:42.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:42.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:42.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:42.815 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:42.815 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:42.815 Removing: /var/run/dpdk/spdk1/config 00:30:42.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:42.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:42.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:42.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 
00:30:42.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:42.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:42.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:42.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:42.815 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:42.815 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:42.815 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:42.815 Removing: /var/run/dpdk/spdk2/config 00:30:42.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:42.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:42.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:42.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:42.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:42.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:42.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:42.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:42.815 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:42.815 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:42.815 Removing: /var/run/dpdk/spdk3/config 00:30:42.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:42.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:42.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:42.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:42.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:42.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:42.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:42.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:42.815 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:42.815 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:42.815 Removing: /var/run/dpdk/spdk4/config 00:30:42.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:42.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:42.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:42.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:42.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:42.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:42.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:42.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:42.815 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:42.815 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:42.815 Removing: /dev/shm/bdev_svc_trace.1 00:30:42.815 Removing: /dev/shm/nvmf_trace.0 00:30:42.815 Removing: /dev/shm/spdk_tgt_trace.pid2143455 00:30:42.815 Removing: /var/run/dpdk/spdk0 00:30:42.815 Removing: /var/run/dpdk/spdk1 00:30:42.815 Removing: /var/run/dpdk/spdk2 00:30:42.815 Removing: /var/run/dpdk/spdk3 00:30:42.815 Removing: /var/run/dpdk/spdk4 00:30:42.815 Removing: /var/run/dpdk/spdk_pid2141279 00:30:42.815 Removing: /var/run/dpdk/spdk_pid2142388 00:30:42.815 Removing: /var/run/dpdk/spdk_pid2143455 00:30:42.815 Removing: /var/run/dpdk/spdk_pid2144092 00:30:42.815 Removing: /var/run/dpdk/spdk_pid2145033 00:30:42.815 Removing: /var/run/dpdk/spdk_pid2145275 00:30:43.075 Removing: /var/run/dpdk/spdk_pid2146249 00:30:43.075 Removing: /var/run/dpdk/spdk_pid2146396 00:30:43.075 Removing: /var/run/dpdk/spdk_pid2146607 00:30:43.075 Removing: /var/run/dpdk/spdk_pid2148116 00:30:43.075 Removing: /var/run/dpdk/spdk_pid2149390 00:30:43.075 Removing: 
/var/run/dpdk/spdk_pid2149752
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2150157
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2150469
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2150762
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2151015
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2151261
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2151535
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2152286
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2155395
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2155656
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2156084
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2156448
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2156818
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2157048
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2157450
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2157551
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2157813
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2158042
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2158298
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2158318
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2158870
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2159119
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2159411
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2159677
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2159701
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2159889
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2160138
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2160401
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2160644
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2160897
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2161162
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2161416
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2161661
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2161935
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2162183
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2162454
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2162705
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2162956
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2163215
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2163461
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2163718
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2163974
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2164228
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2164483
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2164733
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2164981
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2165051
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2165365
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2169211
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2213121
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2217382
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2227353
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2232744
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2237194
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2237721
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2243890
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2249892
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2249894
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2250923
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2251790
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2252945
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2253582
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2253637
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2253865
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2253881
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2253917
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2254798
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2255710
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2256628
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2257100
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2257186
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2257516
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2258646
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2259772
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2267904
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2268358
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2272610
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2278264
00:30:43.075 Removing: /var/run/dpdk/spdk_pid2280855
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2291042
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2300439
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2302264
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2303180
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2319776
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2323671
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2348464
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2352978
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2354592
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2356630
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2356827
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2356982
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2357140
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2357858
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2359686
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2360678
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2361181
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2363283
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2364003
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2364736
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2368773
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2378703
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2383038
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2389107
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2390549
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2391881
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2396215
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2400407
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2407767
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2407769
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2412475
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2412705
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2412909
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2413187
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2413342
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2417649
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2418222
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2422545
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2425306
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2431315
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2436704
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2445313
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2452439
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2452482
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2470090
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2470782
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2471413
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2471964
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2472930
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2473626
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2474258
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2474933
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2479580
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2479813
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2485745
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2485932
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2488152
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2495881
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2495887
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2500911
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2502872
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2504851
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2505898
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2507874
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2508945
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2517780
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2518242
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2518949
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2521354
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2521844
00:30:43.335 Removing: /var/run/dpdk/spdk_pid2522396
00:30:43.595 Removing: /var/run/dpdk/spdk_pid2526029
00:30:43.595 Removing: /var/run/dpdk/spdk_pid2526105
00:30:43.595 Removing: /var/run/dpdk/spdk_pid2527619
00:30:43.595 Removing: /var/run/dpdk/spdk_pid2528168
00:30:43.595 Removing: /var/run/dpdk/spdk_pid2528306
00:30:43.595 Clean
00:30:43.595 14:58:03 -- common/autotest_common.sh@1451 -- # return 0
00:30:43.595 14:58:03 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:30:43.595 14:58:03 -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:43.595 14:58:03 -- common/autotest_common.sh@10 -- # set +x
00:30:43.595 14:58:03 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:30:43.595 14:58:03 -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:43.595 14:58:03 -- common/autotest_common.sh@10 -- # set +x
00:30:43.595 14:58:03 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:43.595 14:58:03 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:30:43.595 14:58:03 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:30:43.595 14:58:03 -- spdk/autotest.sh@391 -- # hash lcov
00:30:43.595 14:58:03 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:30:43.595 14:58:03 -- spdk/autotest.sh@393 -- # hostname
00:30:43.595 14:58:03 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:30:43.856 geninfo: WARNING: invalid characters removed from testname!
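
The entries above and below are autotest.sh's coverage post-processing: lcov captures the counters accumulated during the test run into cov_test.info, merges that capture with the pre-test baseline cov_base.info into cov_total.info, and then strips third-party and uninteresting paths from the combined report. A minimal sketch of the equivalent flow, assuming the same output directory layout, an lcov/gcov toolchain on PATH, and the hostname-derived test name (spdk-wfp-08) used by this job; the loop over filter patterns is a condensation for illustration, not the literal autotest.sh code:

    #!/usr/bin/env bash
    # Sketch of the lcov capture/merge/filter sequence shown in this log.
    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

    # capture coverage counters written while the tests ran
    lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"

    # merge the pre-test baseline with the test-time capture
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # drop third-party and uninteresting sources from the combined report
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done

    # intermediate tracefiles are no longer needed once cov_total.info exists
    rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"
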
00:31:05.844 14:58:23 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:06.103 14:58:26 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:08.010 14:58:28 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:09.917 14:58:29 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:11.825 14:58:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:13.732 14:58:33 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:15.112 14:58:35 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:31:15.372 14:58:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:15.372 14:58:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:31:15.372 14:58:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:15.372 14:58:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:15.372 14:58:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:15.372 14:58:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:15.372 14:58:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:15.372 14:58:35 -- paths/export.sh@5 -- $ export PATH
00:31:15.372 14:58:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:15.372 14:58:35 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:31:15.372 14:58:35 -- common/autobuild_common.sh@444 -- $ date +%s
00:31:15.372 14:58:35 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721912315.XXXXXX
00:31:15.372 14:58:35 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721912315.3LRSex
00:31:15.372 14:58:35 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:31:15.372 14:58:35 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:31:15.372 14:58:35 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:31:15.372 14:58:35 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:31:15.372 14:58:35 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:31:15.372 14:58:35 -- common/autobuild_common.sh@460 -- $ get_config_params
00:31:15.372 14:58:35 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:31:15.372 14:58:35 -- common/autotest_common.sh@10 -- $ set +x
00:31:15.372 14:58:35 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:31:15.372 14:58:35 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:31:15.372 14:58:35 -- pm/common@17 -- $ local monitor
00:31:15.372 14:58:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:15.372 14:58:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:15.373 14:58:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:15.373 14:58:35 -- pm/common@21 -- $ date +%s
00:31:15.373 14:58:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:15.373 14:58:35 -- pm/common@25 -- $ sleep 1
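
start_monitor_resources above kicks off the power/utilization collectors for the packaging step; the entries that follow launch collect-cpu-load, collect-vmstat, collect-cpu-temp and (via sudo) collect-bmc-pm against the power output directory, and the stop_monitor_resources EXIT trap later reads each recorded *.pid file and sends TERM, as seen further down. A rough sketch of that start/stop pattern, assuming the collector scripts under scripts/perf/pm/ daemonize themselves and write their own pid files as the later pid-file checks in this log suggest; the helper function names here are simplified stand-ins for pm/common, not the actual SPDK code:

    #!/usr/bin/env bash
    # Sketch of the resource-monitor start/stop pattern used around autopackage.
    POWER_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    PM_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
    SUFFIX="autopackage.sh.$(date +%s)"   # matches the monitor.autopackage.sh.<epoch> naming in this run

    start_monitors() {
        # each collector logs to monitor.<suffix>_<name>.pm.log and records <name>.pid in POWER_DIR
        "$PM_DIR/collect-cpu-load" -d "$POWER_DIR" -l -p "monitor.$SUFFIX"
        "$PM_DIR/collect-vmstat"   -d "$POWER_DIR" -l -p "monitor.$SUFFIX"
        "$PM_DIR/collect-cpu-temp" -d "$POWER_DIR" -l -p "monitor.$SUFFIX"
        sudo -E "$PM_DIR/collect-bmc-pm" -d "$POWER_DIR" -l -p "monitor.$SUFFIX"
    }

    stop_monitors() {
        # mirror of signal_monitor_resources TERM: kill whatever pid each collector recorded
        # (the BMC collector runs under sudo, so the real teardown uses sudo -E kill for it)
        local pidfile
        for pidfile in "$POWER_DIR"/collect-*.pid; do
            [[ -e $pidfile ]] || continue
            kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
        done
    }

    trap stop_monitors EXIT
    start_monitors
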
00:31:15.373 14:58:35 -- pm/common@21 -- $ date +%s
00:31:15.373 14:58:35 -- pm/common@21 -- $ date +%s
00:31:15.373 14:58:35 -- pm/common@21 -- $ date +%s
00:31:15.373 14:58:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721912315
00:31:15.373 14:58:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721912315
00:31:15.373 14:58:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721912315
00:31:15.373 14:58:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721912315
00:31:15.373 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721912315_collect-vmstat.pm.log
00:31:15.373 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721912315_collect-cpu-load.pm.log
00:31:15.373 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721912315_collect-cpu-temp.pm.log
00:31:15.373 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721912315_collect-bmc-pm.bmc.pm.log
00:31:16.315 14:58:36 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:31:16.315 14:58:36 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:31:16.315 14:58:36 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:16.315 14:58:36 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:16.315 14:58:36 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:16.315 14:58:36 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:16.315 14:58:36 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:16.315 14:58:36 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:16.315 14:58:36 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:16.315 14:58:36 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:16.315 14:58:36 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:16.315 14:58:36 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:16.315 14:58:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:16.315 14:58:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:16.315 14:58:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:31:16.315 14:58:36 -- pm/common@44 -- $ pid=2538323
00:31:16.315 14:58:36 -- pm/common@50 -- $ kill -TERM 2538323
00:31:16.315 14:58:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:16.315 14:58:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:31:16.315 14:58:36 -- pm/common@44 -- $ pid=2538324
00:31:16.315 14:58:36 -- pm/common@50 -- $ kill -TERM 2538324
00:31:16.315 14:58:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:16.315 14:58:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:31:16.315 14:58:36 -- pm/common@44 -- $ pid=2538326
00:31:16.315 14:58:36 -- pm/common@50 -- $ kill -TERM 2538326
00:31:16.315 14:58:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:16.315 14:58:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:31:16.315 14:58:36 -- pm/common@44 -- $ pid=2538347
00:31:16.315 14:58:36 -- pm/common@50 -- $ sudo -E kill -TERM 2538347
00:31:16.315 + [[ -n 2037258 ]]
00:31:16.315 + sudo kill 2037258
00:31:16.325 [Pipeline] }
00:31:16.344 [Pipeline] // stage
00:31:16.350 [Pipeline] }
00:31:16.367 [Pipeline] // timeout
00:31:16.372 [Pipeline] }
00:31:16.390 [Pipeline] // catchError
00:31:16.395 [Pipeline] }
00:31:16.413 [Pipeline] // wrap
00:31:16.418 [Pipeline] }
00:31:16.433 [Pipeline] // catchError
00:31:16.443 [Pipeline] stage
00:31:16.446 [Pipeline] { (Epilogue)
00:31:16.461 [Pipeline] catchError
00:31:16.463 [Pipeline] {
00:31:16.478 [Pipeline] echo
00:31:16.480 Cleanup processes
00:31:16.487 [Pipeline] sh
00:31:16.777 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:16.777 2538442 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:31:16.777 2538721 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:16.791 [Pipeline] sh
00:31:17.077 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:17.077 ++ grep -v 'sudo pgrep'
00:31:17.077 ++ awk '{print $1}'
00:31:17.077 + sudo kill -9 2538442
00:31:17.093 [Pipeline] sh
00:31:17.430 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:27.429 [Pipeline] sh
00:31:27.716 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:27.716 Artifacts sizes are good
00:31:27.730 [Pipeline] archiveArtifacts
00:31:27.738 Archiving artifacts
00:31:27.940 [Pipeline] sh
00:31:28.249 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:31:28.263 [Pipeline] cleanWs
00:31:28.274 [WS-CLEANUP] Deleting project workspace...
00:31:28.274 [WS-CLEANUP] Deferred wipeout is used...
00:31:28.280 [WS-CLEANUP] done
00:31:28.282 [Pipeline] }
00:31:28.301 [Pipeline] // catchError
00:31:28.311 [Pipeline] sh
00:31:28.594 + logger -p user.info -t JENKINS-CI
00:31:28.603 [Pipeline] }
00:31:28.619 [Pipeline] // stage
00:31:28.623 [Pipeline] }
00:31:28.638 [Pipeline] // node
00:31:28.643 [Pipeline] End of Pipeline
00:31:28.674 Finished: SUCCESS